docs: updated the instructions, aligned the commands & deleted deprecated docs #481

Merged: 4 commits, Aug 1, 2024
Changes from 1 commit
1 change: 1 addition & 0 deletions docs/main/glossary.md
@@ -21,6 +21,7 @@ description: This section lists the abbreviations used thorughout the OpenEBS do
| EKS | Elastic Kubernetes Service |
| FIO | Flexible IO Tester |
| FSB | File System Backup |
+| GCP | Google Cloud Platform |
| GCS | Google Cloud Storage |
| GKE | Google Kubernetes Engine |
| HA | High Availability |
2 changes: 1 addition & 1 deletion docs/main/quickstart-guide/deploy-a-test-application.md
@@ -84,7 +84,7 @@ The next step is to create a PersistentVolumeClaim. Pods will use PersistentVolu
```

:::note
-As the Local PV storage classes use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in the Pod spec, then PVC will remain in `pending` state. Refer the issue [#2915](https://github.com/openebs/openebs/issues/2915) for more details.
+As the Local PV storage classes use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in the Pod spec, then PVC will remain in `pending` state. Refer to the issue [#2915](https://github.com/openebs/openebs/issues/2915) for more details.
:::

2. Create the Pod:
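The Pod spec applied in step 2 lies outside this hunk. As a rough illustration of the note above, a Pod that consumes the PVC without `nodeName` could look like the following sketch (the Pod name, image, and claim name are placeholders, not taken from the guide); node placement is left to the scheduler so the `waitForFirstConsumer` binding can complete:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-pv-demo              # placeholder name
spec:
  # No nodeName here: the scheduler picks the node and the PVC binds when the Pod is placed.
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: demo-vol
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: local-hostpath-pvc   # placeholder PVC name
```

If the workload must land on a specific node, a `nodeSelector` or node affinity rule keeps the deferred binding working, whereas `nodeName` bypasses the scheduler entirely and leaves the PVC pending.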
2 changes: 1 addition & 1 deletion docs/main/releases.md
@@ -81,7 +81,7 @@ Earlier, the scale of volume was not allowed when the volume already has a snaps
### Watch Items and Known Issues - Local Storage

Local PV ZFS / Local PV LVM on a single worker node encounters issues after upgrading to the latest versions. The issue is specifically associated with the change of the controller manifest to a Deployment type, which results in the failure of new controller pods to join the Running state. The issue appears to be due to the affinity rules set in the old pod, which are not present in the new pods. As a result, since both the old and new pods have relevant labels, the scheduler cannot place the new pod on the same node, leading to scheduling failures when there's only a single node.
-The workaround is to delete the old pod so the new pod can get scheduled. Refer the issue [#3741](https://github.com/openebs/openebs/issues/3751) for more details.
+The workaround is to delete the old pod so the new pod can get scheduled. Refer to the issue [#3741](https://github.com/openebs/openebs/issues/3751) for more details.

### Watch Items and Known Issues - Replicated Storage

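As a hedged sketch of that workaround — assuming the Local PV ZFS/LVM controller runs in the `openebs` namespace (adjust to your install), with the pod name looked up first:

```shell
# List controller pods to find the old one stuck alongside the new Deployment-managed pod
$ kubectl get pods -n openebs

# Delete the old controller pod so the new one can be scheduled on the single node
$ kubectl delete pod <old-controller-pod-name> -n openebs
```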
@@ -13,9 +13,9 @@ This documentation outlines the process of migrating application volumes from CS
**Velero Support**: Velero supports the backup and restoration of Kubernetes volumes attached to pods through File System Backup (FSB) or Pod Volume Backup. This process involves using modules from popular open-source backup tools like Restic (which we will utilize).

- For **cloud provider plugins**, see the [Velero Docs - Providers section](https://velero.io/docs/main/supported-providers/).
-- **Velero GKE Configuration (Prerequisites)**: You can find the prerequisites and configuration details for Velero in a Google Kubernetes Engine (GKE) environment on the GitHub [here](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup).
-- **Object Storage Requirement**: To store backups, Velero necessitates an object storage bucket. In our case, we utilize a Google Cloud Storage (GCS) bucket. Configuration details and setup can be found on the GitHub [here](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup).
-- **Velero Basic Installation**: For a step-by-step guide on the basic installation of Velero, see the [Velero Docs - Basic Install section](https://velero.io/docs/v1.11/basic-install/).
+- **Velero GKE Configuration (Prerequisites)**: Refer [Velero plugin for Google Cloud Platform (GCP)](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) to view the prerequisites and configuration details for Velero in a Google Kubernetes Engine (GKE) environment.
+- **Object Storage Requirement**: To store backups, Velero necessitates an object storage bucket. In our case, we utilize a Google Cloud Storage (GCS) bucket. Refer [Velero plugin for GCP](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) to view the setup and configuration details.
+- **Velero Basic Installation**: Refer to the [Velero Documentation - Basic Install section](https://velero.io/docs/v1.11/basic-install/) for a step-by-step guide on the basic installation of Velero.

## See Also

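To make the Velero flow in the bullets above concrete, here is a hedged sketch of an install against a GCS bucket followed by a File System Backup. The bucket name, credentials file, namespace, and plugin version are assumptions; replace them with the values from the linked GCP plugin setup page:

```shell
# Install Velero with the GCP object-store plugin (Velero 1.10+ node-agent syntax assumed)
$ velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.8.0 \
    --bucket <gcs-bucket-name> \
    --secret-file ./credentials-velero \
    --use-node-agent

# Back up the application namespace, sending pod volumes through FSB (Restic)
$ velero backup create app-volumes-backup \
    --include-namespaces <app-namespace> \
    --default-volumes-to-fs-backup
```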
@@ -125,7 +125,7 @@ The provisioner name for ZFS driver is "zfs.csi.openebs.io", we have to use this

**Scheduler**

-The ZFS driver has its own scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it will try to find a ZFS pool which has less number of volumes provisioned in it or less capacity of volume provisioned out of a pool respectively, from all the nodes where the ZFS pools are available. Refer [StorageClass With k8s Scheduler](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler) to know about how to select scheduler via storage-class.
+The ZFS driver has its own scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it will try to find a ZFS pool which has less number of volumes provisioned in it or less capacity of volume provisioned out of a pool respectively, from all the nodes where the ZFS pools are available. Refer [StorageClass With K8s Scheduler](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler) to learn how to select a scheduler via storage class.

Once it can find the node, it will create a PV for that node and also create a ZFSVolume custom resource for the volume with the NODE information. The watcher for this ZFSVolume CR will get all the information for this object and creates a ZFS dataset (zvol) with the given ZFS property on the mentioned node.

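A hedged sketch of how a scheduling algorithm is typically selected through StorageClass parameters (the `scheduler`, `poolname`, and `fstype` keys follow the zfs-localpv StorageClass conventions linked above; verify them against that page before use):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-weighted   # placeholder name
allowVolumeExpansion: true
parameters:
  poolname: "zfspv-pool"          # ZFS pool expected on the candidate nodes
  fstype: "zfs"
  scheduler: "CapacityWeighted"   # or "VolumeWeighted"
provisioner: zfs.csi.openebs.io
```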
@@ -34,7 +34,7 @@ If you have the disk (say /dev/sdb), then you can use the below command to creat
$ zpool create zfspv-pool /dev/sdb
```

-You can also create mirror or raidz pool as per your need. Refer [Local PV ZFS](https://github.com/openzfs/zfs) for more information.
+You can also create mirror or raidz pool as per your need. Refer to the [OpenZFS Documentation](https://openzfs.github.io/openzfs-docs/) for more details.

If you do not have the disk, then you can create the zpool on the loopback device which is backed by a sparse file. Use this for testing purpose only.

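For illustration of the pool layouts mentioned above, a few standard `zpool` invocations, assuming spare disks /dev/sdb-/dev/sdd (and, for the loopback case, a sparse file; testing only):

```shell
# Mirrored pool across two disks
$ zpool create zfspv-pool mirror /dev/sdb /dev/sdc

# Single-parity raidz pool across three disks
$ zpool create zfspv-pool raidz /dev/sdb /dev/sdc /dev/sdd

# Loopback pool backed by a sparse file (testing only)
$ truncate -s 32G /tmp/zfspv-disk.img
$ zpool create zfspv-pool $(losetup -f --show /tmp/zfspv-disk.img)
```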