From a02c51fa3b1afdd0fcfd46c0f621d56b9f077685 Mon Sep 17 00:00:00 2001 From: Bala Harish <161304963+balaharish7@users.noreply.github.com> Date: Wed, 24 Jul 2024 14:09:28 +0530 Subject: [PATCH 1/4] docs: updated the instructions and aligned the commands Signed-off-by: Bala Harish <161304963+balaharish7@users.noreply.github.com> --- .../openebs-on-kubernetes-platforms/gke.md | 12 +- .../microkubernetes.md | 2 +- .../openebs-on-kubernetes-platforms/talos.md | 4 +- .../Solutioning/read-write-many/nfspvc.md | 8 +- docs/main/faqs/faqs.md | 2 +- .../deploy-a-test-application.md | 8 +- docs/main/quickstart-guide/installation.md | 6 +- docs/main/releases.md | 2 +- docs/main/troubleshooting/install.md | 135 ---- docs/main/troubleshooting/localpv.md | 150 ----- docs/main/troubleshooting/mayastor.md | 15 - .../troubleshooting-replicated-storage.md | 2 +- docs/main/troubleshooting/uninstall.md | 52 -- .../troubleshooting/volume-provisioning.md | 543 --------------- .../additional-information/alphafeatures.md | 8 +- .../local-pv-hostpath/hostpath-deployment.md | 2 +- .../hostpath-installation.md | 2 +- .../local-pv-lvm/lvm-configuration.md | 28 +- .../local-pv-lvm/lvm-deployment.md | 12 +- .../local-pv-lvm/lvm-installation.md | 6 +- .../advanced-operations/zfs-backup-restore.md | 4 +- .../local-pv-zfs/zfs-configuration.md | 10 +- .../local-pv-zfs/zfs-installation.md | 4 +- .../localpv-hostpath.md | 151 ----- .../local-storage-user-guide/lvm-localpv.md | 167 ----- .../local-storage-user-guide/zfs-localpv.md | 364 ---------- docs/main/user-guides/localpv-device.md | 624 ------------------ docs/main/user-guides/mayastor.md | 33 - .../additional-information/migrate-etcd.md | 2 +- .../additional-information/scale-etcd.md | 2 +- .../advanced-operations/snapshot.md | 8 +- .../advanced-operations/supportability.md | 2 +- .../rs-configuration.md | 4 +- .../replicated-pv-mayastor/rs-deployment.md | 2 +- .../replicated-pv-mayastor/rs-installation.md | 2 +- docs/main/user-guides/uninstallation.md | 4 +- docs/main/user-guides/upgrades.md | 2 +- 37 files changed, 80 insertions(+), 2304 deletions(-) delete mode 100644 docs/main/troubleshooting/install.md delete mode 100644 docs/main/troubleshooting/localpv.md delete mode 100644 docs/main/troubleshooting/mayastor.md delete mode 100644 docs/main/troubleshooting/uninstall.md delete mode 100644 docs/main/troubleshooting/volume-provisioning.md delete mode 100644 docs/main/user-guides/local-storage-user-guide/localpv-hostpath.md delete mode 100644 docs/main/user-guides/local-storage-user-guide/lvm-localpv.md delete mode 100644 docs/main/user-guides/local-storage-user-guide/zfs-localpv.md delete mode 100644 docs/main/user-guides/localpv-device.md delete mode 100644 docs/main/user-guides/mayastor.md diff --git a/docs/main/Solutioning/openebs-on-kubernetes-platforms/gke.md b/docs/main/Solutioning/openebs-on-kubernetes-platforms/gke.md index afc77247f..6ce1bb39f 100644 --- a/docs/main/Solutioning/openebs-on-kubernetes-platforms/gke.md +++ b/docs/main/Solutioning/openebs-on-kubernetes-platforms/gke.md @@ -34,7 +34,7 @@ Using OpenEBS for GKE with Local SSDs offers several benefits, particularly in m - Adding additional disks to existing node pool is not supported. -- Each Local SSD disk comes in a fixed size and you can attach multiple Local SSD disks to a single VM when you create it. The number of Local SSD disks that you can attach to a VM depends on the VM's machine type. 
See the [Local SSD Disks documentation](https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds) for more information. +- Each Local SSD disk comes in a fixed size and you can attach multiple Local SSD disks to a single VM when you create it. The number of Local SSD disks that you can attach to a VM depends on the VM's machine type. Refer to the [Local SSD Disks documentation](https://cloud.google.com/compute/docs/disks/local-ssd#choose_number_local_ssds) for more information. ::: ## Prerequisites @@ -64,7 +64,7 @@ Before installing Replicated PV Mayastor, make sure that you meet the following - **Enable Huge Pages** 2MiB-sized Huge Pages must be supported and enabled on the storage nodes i.e. nodes where IO engine pods are deployed. A minimum number of 1024 such pages (i.e. 2GiB total) must be available exclusively to the IO engine pod on each node. - Secure Socket Shell (SSH) to the GKE worker node to enable huge pages. See [here](https://cloud.google.com/kubernetes-engine/distributed-cloud/vmware/docs/how-to/ssh-cluster-node) for more details. + Secure Socket Shell (SSH) to the GKE worker node to enable huge pages. Refer to the [Manage Cluster Nodes documentation](https://cloud.google.com/kubernetes-engine/distributed-cloud/vmware/docs/how-to/ssh-cluster-node) for more details. - **Kernel Modules** @@ -76,7 +76,7 @@ Before installing Replicated PV Mayastor, make sure that you meet the following - **Preparing the Cluster** - See the [Replicated PV Mayastor Installation documentation](../rs-installation.md#preparing-the-cluster) for instructions on preparing the cluster. + Refer to the [Replicated PV Mayastor Installation documentation](../rs-installation.md#preparing-the-cluster) for instructions on preparing the cluster. - **ETCD and LOKI Storage Class** @@ -84,7 +84,7 @@ Before installing Replicated PV Mayastor, make sure that you meet the following ## Install Replicated PV Mayastor on GKE -See the [Installing OpenEBS documentation](../../../../quickstart-guide/installation.md#installation-via-helm) to install Replicated PV Mayastor using Helm. +Refer to the [OpenEBS Installation documentation](../../../../quickstart-guide/installation.md#installation-via-helm) to install Replicated PV Mayastor using Helm. - **Helm Install Command** @@ -214,9 +214,9 @@ diskpool.openebs.io/pool-1 created ## Configuration -- See the [Replicated PV Mayastor Configuration documentation](../rs-configuration.md#create-replicated-pv-mayastor-storageclasss) for instructions regarding StorageClass creation. +- Refer to the [Replicated PV Mayastor Configuration documentation](../rs-configuration.md#create-replicated-pv-mayastor-storageclasss) for instructions regarding StorageClass creation. -- See [Deploy an Application documentation](../rs-deployment.md) for instructions regarding PVC creation and deploying an application. +- Refer to the [Deploy an Application documentation](../rs-deployment.md) for instructions regarding PVC creation and deploying an application. 
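  Below is a minimal, illustrative sketch of such a StorageClass for Replicated PV Mayastor, assuming the DiskPools created above are available. The class name `mayastor-3` and the parameter values shown are assumptions for illustration only; refer to the linked configuration documentation for the authoritative options.

  ```
  kubectl apply -f - <<EOF
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: mayastor-3          # example name; choose your own
  parameters:
    protocol: nvmf            # expose volumes over NVMe-oF
    repl: "3"                 # number of data replicas per volume (illustrative)
  provisioner: io.openebs.csi-mayastor
  EOF
  ```

  PVCs that reference this StorageClass are then provisioned as replicated volumes backed by the DiskPools, as described in the Deploy an Application documentation linked above.
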
## Node Failure Scenario diff --git a/docs/main/Solutioning/openebs-on-kubernetes-platforms/microkubernetes.md b/docs/main/Solutioning/openebs-on-kubernetes-platforms/microkubernetes.md index 8edca1505..2cbff7090 100644 --- a/docs/main/Solutioning/openebs-on-kubernetes-platforms/microkubernetes.md +++ b/docs/main/Solutioning/openebs-on-kubernetes-platforms/microkubernetes.md @@ -73,7 +73,7 @@ microk8s kubectl patch felixconfigurations default --patch '{"spec":{"featureDet > For more details about this issue, refer to the [GitHub issue](https://github.com/canonical/microk8s/issues/3695). :::info -Refer to the [Replicated PV Mayastor Configuration](../replicated-pv-mayastor/rs-configuration.md) for further **Configuration of Replicated PV Mayastor** including storage pools, storage class, persistent volume claims, and application setup. +Refer to the [Replicated PV Mayastor Configuration documentation](../replicated-pv-mayastor/rs-configuration.md) for further **Configuration of Replicated PV Mayastor** including storage pools, storage class, persistent volume claims, and application setup. ::: ## See Also diff --git a/docs/main/Solutioning/openebs-on-kubernetes-platforms/talos.md b/docs/main/Solutioning/openebs-on-kubernetes-platforms/talos.md index 209172c25..f44da90e7 100644 --- a/docs/main/Solutioning/openebs-on-kubernetes-platforms/talos.md +++ b/docs/main/Solutioning/openebs-on-kubernetes-platforms/talos.md @@ -18,7 +18,7 @@ All the below configurations can be configured either during initial cluster cre ### Pod Security -By default, Talos Linux applies a baseline pod security profile across namespaces except for the kube-system namespace. This default setting restricts Replicated PV Mayastors’s ability to manage and access system resources. You need to add the exemptions for Replicated PV Mayastor namespace. See the [Talos Documentation](https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/) for detailed instructions on Pod Security. +By default, Talos Linux applies a baseline pod security profile across namespaces except for the kube-system namespace. This default setting restricts Replicated PV Mayastors’s ability to manage and access system resources. You need to add the exemptions for Replicated PV Mayastor namespace. Refer to the [Talos Documentation](https://www.talos.dev/v1.6/kubernetes-guides/configuration/pod-security/) for detailed instructions on Pod Security. **Create a file cp.yaml** @@ -109,7 +109,7 @@ talosctl -n service kubelet restart ## Install Replicated PV Mayastor on Talos -To install Replicated PV Mayastor using Helm on Talos, refer to the [installation steps](../../../../quickstart-guide/installation.md#installation-via-helm) in the Quickstart Guide. +Refer to the [OpenEBS Installation documentation](../../../../quickstart-guide/installation.md#installation-via-helm) to install Replicated PV Mayastor using Helm on Talos. ## See Also diff --git a/docs/main/Solutioning/read-write-many/nfspvc.md b/docs/main/Solutioning/read-write-many/nfspvc.md index abf0b5647..88eea3f14 100644 --- a/docs/main/Solutioning/read-write-many/nfspvc.md +++ b/docs/main/Solutioning/read-write-many/nfspvc.md @@ -31,13 +31,13 @@ NFS volumes can be mounted as a `PersistentVolume` in Kubernetes pods. It is als An NFS CSI driver is a specific type of Container Storage Interface (CSI) driver that enables container orchestration systems, like Kubernetes, to manage storage using the NFS. 
NFS (A distributed file system protocol) allows multiple machines to share directories over a network. The NFS CSI driver facilitates the use of NFS storage by providing the necessary interface for creating, mounting, and managing NFS volumes within a containerized environment, ensuring that applications running in containers can easily access and use NFS-based storage. -CSI plugin name: `nfs.csi.k8s.io`. This driver requires an existing and already configured NFSv3 or NFSv4 server. It supports dynamic provisioning of Persistent Volumes via PVCs by creating a new sub-directory under the NFS server. This can be deployed using Helm. See [NFS CSI driver for Kubernetes](https://github.com/kubernetes-csi/csi-driver-nfs?tab=readme-ov-file#install-driver-on-a-kubernetes-cluster) to install NFS CSI driver on a Kubernetes cluster. +CSI plugin name: `nfs.csi.k8s.io`. This driver requires an existing and already configured NFSv3 or NFSv4 server. It supports dynamic provisioning of Persistent Volumes via PVCs by creating a new sub-directory under the NFS server. This can be deployed using Helm. Refer [NFS CSI driver for Kubernetes](https://github.com/kubernetes-csi/csi-driver-nfs?tab=readme-ov-file#install-driver-on-a-kubernetes-cluster) to install NFS CSI driver on a Kubernetes cluster. ### Replicated PV Mayastor Replicated PV Mayastor is a performance-optimised Container Native Storage (CNS) solution. The goal of OpenEBS is to extend Kubernetes with a declarative data plane, providing flexible persistent storage for stateful applications. -Make sure you have installed Replicated PV Mayastor before proceeding to the next step. See the [Installing OpenEBS documentation](../../../../quickstart-guide/installation.md#installation-via-helm) to install Replicated PV Mayastor using Helm. +Make sure you have installed Replicated PV Mayastor before proceeding to the next step. Refer to the [OpenEBS Installation documentation](../../../../quickstart-guide/installation.md#installation-via-helm) to install Replicated PV Mayastor using Helm. ## Details of Setup @@ -45,7 +45,7 @@ Make sure you have installed Replicated PV Mayastor before proceeding to the nex 1. Create a Replicated PV Mayastor Pool. - Create a Replicated PV Mayastor pool that satisfies the performance and availability requirements. See [Replicated PV Mayastor Configuration documentation](../../user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md#create-diskpools) for more details. + Create a Replicated PV Mayastor pool that satisfies the performance and availability requirements. Refer to the [Replicated PV Mayastor Configuration documentation](../../user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md#create-diskpools) for more details. **Example of a Replicated PV Mayastor Pool** @@ -66,7 +66,7 @@ Make sure you have installed Replicated PV Mayastor before proceeding to the nex 2. Create a Replicated PV Mayastor Storage Class. - Create a storage class to point to the above created pool. Also, select the number of replicas and the default size of the volume. See [Replicated PV Mayastor Configuration documentation](../../user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md#create-replicated-pv-mayastor-storageclasss) for more details. + Create a storage class to point to the above created pool. Also, select the number of replicas and the default size of the volume. 
Refer to the [Replicated PV Mayastor Configuration documentation](../../user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md#create-replicated-pv-mayastor-storageclasss) for more details. **Example of a Replicated PV Mayastor Storage Class** diff --git a/docs/main/faqs/faqs.md b/docs/main/faqs/faqs.md index 03cee43db..73b4cedf2 100644 --- a/docs/main/faqs/faqs.md +++ b/docs/main/faqs/faqs.md @@ -433,7 +433,7 @@ Faulted replicas are automatically rebuilt in the background without IO disrupti ### How does OpenEBS provide high availability for stateful workloads? -See [here](../user-guides/replicated-storage-user-guide/rs-configuration.md#stsaffinitygroup) for more information. +Refer to the [Replicated PV Mayastor Configuration documentation](../user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md#stsaffinitygroup) for more information. [Go to top](#top) diff --git a/docs/main/quickstart-guide/deploy-a-test-application.md b/docs/main/quickstart-guide/deploy-a-test-application.md index 868a9c3c1..53a7f61ed 100644 --- a/docs/main/quickstart-guide/deploy-a-test-application.md +++ b/docs/main/quickstart-guide/deploy-a-test-application.md @@ -9,9 +9,9 @@ description: This section will help you to deploy a test application. --- :::info -- See [Local PV LVM Deployment](../user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md) to deploy Local PV LVM. -- See [Local PV ZFS Deployment](../user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md) to deploy Local PV ZFS. -- See [Replicated PV Mayastor Deployment](../user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-deployment.md) to deploy Replicated PV Mayastor. +- Refer to the [Local PV LVM Deployment documentation](../user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md) to deploy Local PV LVM. +- Refer to the [Local PV ZFS Deployment documentation](../user-guides/local-storage-user-guide/local-pv-zfs/zfs-deployment.md) to deploy Local PV ZFS. +- Refer to the [Replicated PV Mayastor Deployment documentation](../user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-deployment.md) to deploy Replicated PV Mayastor. ::: # Deploy an Application @@ -84,7 +84,7 @@ The next step is to create a PersistentVolumeClaim. Pods will use PersistentVolu ``` :::note - As the Local PV storage classes use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in the Pod spec, then PVC will remain in `pending` state. See [here](https://github.com/openebs/openebs/issues/2915) for more details. + As the Local PV storage classes use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in the Pod spec, then PVC will remain in `pending` state. Refer the issue [#2915](https://github.com/openebs/openebs/issues/2915) for more details. ::: 2. 
Create the Pod: diff --git a/docs/main/quickstart-guide/installation.md b/docs/main/quickstart-guide/installation.md index 033e96eb5..bbe3e6ea2 100644 --- a/docs/main/quickstart-guide/installation.md +++ b/docs/main/quickstart-guide/installation.md @@ -1,9 +1,11 @@ --- id: installation -title: Installing OpenEBS +title: OpenEBS Installation keywords: + - OpenEBS Installation - Installing OpenEBS - Installing OpenEBS through helm + - Installation description: This guide will help you to customize and install OpenEBS --- @@ -67,7 +69,7 @@ OpenEBS provides several options to customize during installation such as: - Specifying the nodes on which OpenEBS components should be deployed and so forth. :::info -See [here](https://github.com/openebs/openebs/blob/main/charts/README.md#values) for configurable options. +Refer to the [OpenEBS helm chart](https://github.com/openebs/openebs/blob/main/charts/README.md#values) for configurable options. ::: 2. Install the OpenEBS helm chart with default values. diff --git a/docs/main/releases.md b/docs/main/releases.md index e79557779..6199b33a9 100644 --- a/docs/main/releases.md +++ b/docs/main/releases.md @@ -81,7 +81,7 @@ Earlier, the scale of volume was not allowed when the volume already has a snaps ### Watch Items and Known Issues - Local Storage Local PV ZFS / Local PV LVM on a single worker node encounters issues after upgrading to the latest versions. The issue is specifically associated with the change of the controller manifest to a Deployment type, which results in the failure of new controller pods to join the Running state. The issue appears to be due to the affinity rules set in the old pod, which are not present in the new pods. As a result, since both the old and new pods have relevant labels, the scheduler cannot place the new pod on the same node, leading to scheduling failures when there's only a single node. -The workaround is to delete the old pod so the new pod can get scheduled. See the issue [#3741](https://github.com/openebs/openebs/issues/3751) for more details. +The workaround is to delete the old pod so the new pod can get scheduled. Refer the issue [#3741](https://github.com/openebs/openebs/issues/3751) for more details. ### Watch Items and Known Issues - Replicated Storage diff --git a/docs/main/troubleshooting/install.md b/docs/main/troubleshooting/install.md deleted file mode 100644 index 3eaa24c87..000000000 --- a/docs/main/troubleshooting/install.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -id: install -title: Troubleshooting OpenEBS Install -keywords: - - OpenEBS - - OpenEBS installation - - OpenEBS installation troubleshooting -description: This page contains list of OpenEBS installation related troubleshooting information. ---- - -## General guidelines for troubleshooting - -- Contact [OpenEBS Community](/docs/introduction/community) for support. -- Search for similar issues added in this troubleshooting section. -- Search for any reported issues on [StackOverflow under OpenEBS tag](https://stackoverflow.com/questions/tagged/openebs) - -[Installation failed because insufficient user rights](#install-failed-user-rights) - -[iSCSI client is not setup on Nodes. Application Pod is in ContainerCreating state.](#install-failed-iscsi-not-configured) - -[Why does OpenEBS provisioner pod restart continuously?](#openebs-provisioner-restart-continuously) - -[OpenEBS installation fails on Azure](#install-failed-azure-no-rbac-set). 
- -[A multipath.conf file claims all SCSI devices in OpenShift](#multipath-conf-claims-all-scsi-devices-openshift) - -### Installation failed because of insufficient user rights {#install-failed-user-rights} - -OpenEBS installation can fail in some cloud platform with the following errors. - -```shell hideCopy -namespace "openebs" created -serviceaccount "openebs-maya-operator" created -clusterrolebinding.rbac.authorization.k8s.io "openebs-maya-operator" created -deployment.apps "maya-apiserver" created -service "maya-apiserver-service" created -deployment.apps "openebs-provisioner" created -deployment.apps "openebs-snapshot-operator" created -configmap "openebs-ndm-config" created -daemonset.extensions "openebs-ndm" created -Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/openebs/openebs/v0.8.x/k8s/openebs-operator.yaml": clusterroles.rbac.authorization.k8s.io "openebs-maya-operator" is forbidden: attempt to grant extra privileges: [{[*] [*] [nodes] [] []} {[*] [*] [nodes/proxy] [] []} {[*] [*] [namespaces] [] []} {[*] [*] [services] [] []} {[*] [*] [pods] [] []} {[*] [*] [deployments] [] []} {[*] [*] [events] [] []} {[*] [*] [endpoints] [] []} {[*] [*] [configmaps] [] []} {[*] [*] [jobs] [] []} {[*] [*] [storageclasses] [] []} {[*] [*] [persistentvolumeclaims] [] []} {[*] [*] [persistentvolumes] [] []} {[get] [volumesnapshot.external-storage.k8s.io] [volumesnapshots] [] []} {[list] [volumesnapshot.external-storage.k8s.io] [volumesnapshots] [] []} {[watch] [volumesnapshot.external-storage.k8s.io] [volumesnapshots] [] []} {[create] [volumesnapshot.external-storage.k8s.io] [volumesnapshots] [] []} {[update] [volumesnapshot.external-storage.k8s.io] [volumesnapshots] [] []} {[patch] [volumesnapshot.external-storage.k8s.io] [volumesnapshots] [] []} {[delete] [volumesnapshot.external-storage.k8s.io] [volumesnapshots] [] []} {[get] [volumesnapshot.external-storage.k8s.io] [volumesnapshotdatas] [] []} {[list] [volumesnapshot.external-storage.k8s.io] [volumesnapshotdatas] [] []} {[watch] [volumesnapshot.external-storage.k8s.io] [volumesnapshotdatas] [] []} {[create] [volumesnapshot.external-storage.k8s.io] [volumesnapshotdatas] [] []} {[update] [volumesnapshot.external-storage.k8s.io] [volumesnapshotdatas] [] []} {[patch] [volumesnapshot.external-storage.k8s.io] [volumesnapshotdatas] [] []} {[delete] [volumesnapshot.external-storage.k8s.io] [volumesnapshotdatas] [] []} {[get] [apiextensions.k8s.io] [customresourcedefinitions] [] []} {[list] [apiextensions.k8s.io] [customresourcedefinitions] [] []} {[create] [apiextensions.k8s.io] [customresourcedefinitions] [] []} {[update] [apiextensions.k8s.io] [customresourcedefinitions] [] []} {[delete] [apiextensions.k8s.io] [customresourcedefinitions] [] []} {[*] [*] [disks] [] []} {[*] [*] [storagepoolclaims] [] []} {[*] [*] [storagepools] [] []} {[*] [*] [castemplates] [] []} {[*] [*] [runtasks] [] []} {[*] [*] [cstorpools] [] []} {[*] [*] [cstorvolumereplicas] [] []} {[*] [*] [cstorvolumes] [] []} {[get] [] [] [] [/metrics]}] user=&{user.name@mayadata.io [system:authenticated] map[user-assertion.cloud.google.com:[AKUJVpmzjjLCED3Vk2Q7wSjXV1gJs/pA3V9ZW53TOjO5bHOExEps6b2IZRjnru9YBKvaj3pgVu+34A0fKIlmLXLHOQdL/uFA4WbKbKfMdi1XC52CcL8gGTXn0/G509L844+OiM+mDJUftls7uIgOIRFAyk2QBixnYv22ybLtO2n8kcpou+ZcNFEVAD6z8Xy3ZLEp9pMd9WdQuttS506x5HIQSpDggWFf9T96yPc0CYmVEmkJm+O7uw==]]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz 
/openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]}] ruleResolutionErrors=[] -``` - -**Troubleshooting** - -You must enable RBAC before OpenEBS installation. This can be done from the kubernetes master console by executing the following command. - -``` -kubectl create clusterrolebinding -admin-binding --clusterrole=cluster-admin --user= -``` - -### iSCSI client is not setup on Nodes. Pod is in ContainerCreating state. {#install-failed-iscsi-not-configured} - -After OpenEBS installation, you may proceed with application deployment which will provision OpenEBS volume. This may fail due to the following error. This can be found by describing the application pod. - -```shell hideCopy -MountVolume.WaitForAttach failed for volume “pvc-ea5b871b-32d3-11e9-9bf5-0a8e969eb15a” : open /sys/class/iscsi_host: no such file or directory - -``` - -**Troubleshooting** - -This logs points that iscsid.service may not be enabled and running on your Nodes. You need to check if the service `iscsid.service` is running. If it is not running, you have to `enable` and `start` the service. You can refer [prerequisites](/user-guides/prerequisites) section and choose your platform to get the steps for enabling it. - -### Why does OpenEBS provisioner pod restart continuously?{#openebs-provisioner-restart-continuously} - -The following output displays the pod status of all namespaces in which the OpenEBS provisioner is restarting continuously. - -``` -NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE -default percona 0/1 Pending 0 36m -kube-system calico-etcd-tl4td 1/1 Running 0 1h 192.168.56.65 master -kube-system calico-kube-controllers-84fd4db7cd-jz9wt 1/1 Running 0 1h 192.168.56.65 master -kube-system calico-node-node1 2/2 Running 0 1h 192.168.56.65 master -kube-system calico-node-zt95x 2/2 Running 0 1h 192.168.56.66 node -kube-system coredns-78fcdf6894-2test 1/1 Running 0 1h 192.168.219.65 master -kube-system coredns-78fcdf6894-test7 1/1 Running 0 1h 192.168.219.66 master -kube-system etcd-master 1/1 Running 0 1h 192.168.56.65 master -kube-system kube-apiserver-master 1/1 Running 0 1h 192.168.56.65 master -kube-system kube-controller-manager-master 1/1 Running 0 1h 192.168.56.65 master -kube-system kube-proxy-9t98s 1/1 Running 0 1h 192.168.56.65 master -kube-system kube-proxy-mwk9f 1/1 Running 0 1h 192.168.56.66 node -kube-system kube-scheduler-master 1/1 Running 0 1h 192.168.56.65 master -openebs maya-apiserver-5598cf68ff-pod17 1/1 Running 0 1h 192.168.167.131 node -openebs openebs-provisioner-776846bbff-pod19 0/1 CrashLoopBackOff 16 1h 192.168.167.129 node -openebs openebs-snapshot-operator-5b5f97dd7f-np79k 0/2 CrashLoopBackOff 32 1h 192.168.167.130 node -``` - -**Troubleshooting** - -Perform the following steps to verify if the issue is due to misconfiguration while installing the network component. - -1. Check if your network related pods are running fine. - -2. Check if OpenEBS provisioner HTTPS requests are reaching the apiserver - -3. Use the latest version of network provider images. - -4. Try other network components such as Calico, kube-router etc. if you are not using any of these. - -### OpenEBS installation fails on Azure {#install-failed-azure-no-rbac-set} - -On AKS, while installing OpenEBS using Helm, you may see the following error. 
- -``` -$ helm install openebs/openebs --name openebs --namespace openebs -``` - -```shell hideCopy -Error: release openebs failed: clusterroles.rbac.authorization.k8s.io "openebs" isforbidden: attempt to grant extra privileges:[PolicyRule{Resources:["nodes"], APIGroups:["*"],Verbs:["get"]} PolicyRule{Resources:["nodes"],APIGroups:["*"], Verbs:["list"]}PolicyRule{Resources:["nodes"], APIGroups:["*"],Verbs:["watch"]} PolicyRule{Resources:["nodes/proxy"],APIGroups:["*"], Verbs:["get"]}PolicyRule{Resources:["nodes/proxy"], APIGroups:["*"],Verbs:["list"]} PolicyRule{Resources:["nodes/proxy"],APIGroups:["*"], Verbs:["watch"]}PolicyRule{Resources:["namespaces"], APIGroups:["*"],Verbs:["*"]} PolicyRule{Resources:["services"],APIGroups:["*"], Verbs:["*"]} PolicyRule{Resources:["pods"],APIGroups:["*"], Verbs:["*"]}PolicyRule{Resources:["deployments"], APIGroups:["*"],Verbs:["*"]} PolicyRule{Resources:["events"],APIGroups:["*"], Verbs:["*"]}PolicyRule{Resources:["endpoints"], APIGroups:["*"],Verbs:["*"]} PolicyRule{Resources:["persistentvolumes"],APIGroups:["*"], Verbs:["*"]} PolicyRule{Resources:["persistentvolumeclaims"],APIGroups:["*"], Verbs:["*"]}PolicyRule{Resources:["storageclasses"],APIGroups:["storage.k8s.io"], Verbs:["*"]}PolicyRule{Resources:["storagepools"], APIGroups:["*"],Verbs:["get"]} PolicyRule{Resources:["storagepools"], APIGroups:["*"],Verbs:["list"]} PolicyRule{NonResourceURLs:["/metrics"],Verbs:["get"]}] user=&{system:serviceaccount:kube-system:tiller6f3172cc-4a08-11e8-9af5-0a58ac1f1729 [system:serviceaccounts system:serviceaccounts:kube-systemsystem:authenticated] map[]} ownerrules=[]ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io"cluster-admin" not found] -``` - -**Troubleshooting** - -You must enable RBAC on Azure before OpenEBS installation. For more details, see [Prerequisites](/user-guides/prerequisites). - -### A multipath.conf file claims all SCSI devices in OpenShift {#multipath-conf-claims-all-scsi-devices-openshift} - -A multipath.conf file without either find_multipaths or a manual blacklist claims all SCSI devices. - -#### Workaround - -1. Add the find _multipaths line to_ \_/etc/multipath.conf\_ file similar to the following snippet. - - ``` - defaults { - user_friendly_names yes - find_multipaths yes - } - ``` - -2. Run `multipath -w /dev/sdc` command (replace the devname with your persistent devname). - -## See Also: - -[FAQs](/docs/additional-info/faqs) [Seek support or help](/docs/introduction/community) [Latest release notes](/docs/introduction/releases) diff --git a/docs/main/troubleshooting/localpv.md b/docs/main/troubleshooting/localpv.md deleted file mode 100644 index b93f62c1a..000000000 --- a/docs/main/troubleshooting/localpv.md +++ /dev/null @@ -1,150 +0,0 @@ ---- -id: localpv -title: Troubleshooting OpenEBS - Dynamic LocalPV -keywords: - - OpenEBS - - LocalPV - - Dynamic LocalPV - - LocalPV troubleshooting - - Dynamic LocalPV troubleshooting -description: This page contains a list of Dynamic LocalPV / LocalPV related troubleshooting information. ---- - -### General guidelines for troubleshooting - -- Contact [OpenEBS Community](/docs/introduction/community) for support. -- Search for similar issues added in this troubleshooting section. 
-- Search for any reported issues on [StackOverflow under OpenEBS tag](https://stackoverflow.com/questions/tagged/openebs) - -[LocalPV PVC in Pending state](#pvc-in-pending-state) - -[Application pod using LocalPV device not coming into running state](#application-pod-stuck-pending-pvc) - -[Stale BDC in pending state after PVC is deleted](#stale-bdc-after-pvc-deletion) - -[BDC created by localPV in pending state](#bdc-by-localpv-pending-state) - -### PVC in Pending state {#pvc-in-pending-state} - -Created a PVC using localpv-device / localpv-hostpath storage class. But the PV is not created and PVC in Pending state. - -**Troubleshooting:** -The default localpv storage classes from openebs have `volumeBindingMode: WaitForFirstConsumer`. This means that only when the application pod that uses the PVC is scheduled to a node, the provisioner will receive the volume provision request and will create the volume. - -**Resolution:** -Deploy an application that uses the PVC and the PV will be created and application will start using the PV - -### Application pod using LocalPV not coming into running state {#application-pod-stuck-pending-pvc} - -Application pod that uses localpv device is stuck in `Pending` state with error - -```shell hideCopy -Warning FailedScheduling 7m24s (x2 over 7m24s) default-scheduler persistentvolumeclaim "" not found -``` - -**Troubleshooting:** -Check if there is a blockdevice present on the node (to which the application pod was scheduled,) which matches the capacity requirements of the PVC. - -``` -kubectl get bd -n openebs -o wide -``` - -If matching blockdevices are not present, then the PVC will never get Bound. - -**Resolution:** -Schedule the application pod to a node which has a matching blockdevice available on it. - -### Stale BDC in pending state after PVC is deleted {#stale-bdc-after-pvc-deletion} - -``` -kubectl get bdc -n openebs -``` - -shows stale `Pending` BDCs created by localpv provisioner, even after the corresponding PVC has been deleted. - -**Resolution:** -LocalPV provisioner currently does not delete BDCs in Pending state if the corresponding PVCs are deleted. To remove the stale BDC entries, - -1. Edit the BDC and remove the `- local.openebs.io/finalizer` finalizer - -``` -kubectl edit bdc -n openebs -``` - -2. Delete the BDC - -``` -kubectl delete bdc -n openebs -``` - -### BDC created by localPV in pending state {#bdc-by-localpv-pending-state} - -The BDC created by localpv provisioner (bdc-pvc-xxxx) remains in pending state and PVC does not get Bound - -**Troubleshooting:** -Describe the BDC to check the events recorded on the resource - -``` -kubectl describe bdc bdc-pvc-xxxx -n openebs -``` - -The following are different types of messages shown when the node on which localpv application pod is scheduled, does not have a blockdevice available. - -1. No blockdevices found - -```shell hideCopy -Warning SelectionFailed 14m (x25 over 16m) blockdeviceclaim-operator no blockdevices found -``` - -It means that there were no matching blockdevices after listing based on the labels. Check if there is any `block-device-tag` on the storage class and corresponding tags are available on the blockdevices also - -2. No devices with matching criteria - -```shell hideCopy -Warning SelectionFailed 6m25s (x18 over 11m) blockdeviceclaim-operator no devices found matching the criteria -``` - -It means that the there are no devices for claiming after filtering based on filesystem type and node name. 
Make sure the blockdevices on the node -have the correct filesystem as mentioned in the storage class (default is `ext4`) - -3. No devices with matching resource requirements - -```shell hideCopy -Warning SelectionFailed 85s (x74 over 11m) blockdeviceclaim-operator could not find a device with matching resource requirements -``` - -It means that there are no devices available on the node with a matching capacity requirement. - -**Resolution** - -To schedule the application pod to a node, which has the blockdevices available, a node selector can be used on the application pod. Here the node with hostname `svc1` has blockdevices available, so a node selector is used to schedule the pod to that node. - -Example: - -``` -apiVersion: v1 -kind: Pod -metadata: - name: pod1 -spec: - volumes: - - name: local-storage - persistentVolumeClaim: - claimName: pvc1 - containers: - - name: hello-container - image: busybox - command: - - sh - - -c - - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done' - volumeMounts: - - mountPath: /mnt/store - name: local-storage - nodeSelector: - kubernetes.io/hostname: svc1 -``` - -## See Also: - -[FAQs](/docs/additional-info/faqs) [Seek support or help](/docs/introduction/community) [Latest release notes](/docs/introduction/releases) diff --git a/docs/main/troubleshooting/mayastor.md b/docs/main/troubleshooting/mayastor.md deleted file mode 100644 index b0c9503c1..000000000 --- a/docs/main/troubleshooting/mayastor.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -id: mayastor -title: Troubleshooting Mayastor -keywords: - - OpenEBS - - Mayastor - - Mayastor troubleshooting -description: This page contains information regarding Mayastor related troubleshooting. ---- - -## Troubleshooting - -:::note -This page has moved to https://mayastor.gitbook.io/introduction/quickstart/troubleshooting. -::: diff --git a/docs/main/troubleshooting/troubleshooting-replicated-storage.md b/docs/main/troubleshooting/troubleshooting-replicated-storage.md index 1c410caef..f7d1d1466 100644 --- a/docs/main/troubleshooting/troubleshooting-replicated-storage.md +++ b/docs/main/troubleshooting/troubleshooting-replicated-storage.md @@ -173,7 +173,7 @@ There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu". Type "show configuration" for configuration details. -For bug reporting instructions, please see: +For bug reporting instructions, see: . Find the GDB manual and other documentation resources online at: . diff --git a/docs/main/troubleshooting/uninstall.md b/docs/main/troubleshooting/uninstall.md deleted file mode 100644 index f71924380..000000000 --- a/docs/main/troubleshooting/uninstall.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: uninstall -title: Troubleshooting OpenEBS - Uninstall -keywords: - - OpenEBS - - OpenEBS uninstallation - - OpenEBS uninstallation troubleshooting -description: This page contains a list of OpenEBS uninstallation related troubleshooting information. ---- - -## General guidelines for troubleshooting - -- Contact [OpenEBS Community](/docs/introduction/community) for support. -- Search for similar issues added in this troubleshooting section. 
-- Search for any reported issues on [StackOverflow under OpenEBS tag](https://stackoverflow.com/questions/tagged/openebs) - -## Uninstall - -[Whenever a Jiva PVC is deleted, a job will created and status is seeing as `completed`](#jiva-deletion-scrub-job) - -[cStor Volume Replicas are not getting deleted properly](#cvr-deletion) - -### Whenever a Jiva based PVC is deleted, a new job gets created.{#jiva-deletion-scrub-job} - -As part of deleting the Jiva Volumes, OpenEBS launches scrub jobs for clearing data from the nodes. This job will be running in OpenEBS installed namespace. The completed jobs can be cleared using following command. - -``` -kubectl delete jobs -l openebs.io/cas-type=jiva -n -``` - -In addition, the job is set with a TTL to get cleaned up, if the cluster version is greater than 1.12. However, for the feature to work, the alpha feature needs to be enabled in the cluster. More information can be read from [here](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically). - -### cStor Volume Replicas are not getting deleted properly{#cvr-deletion} - -Sometimes, there are chances that cStor volumes Replicas (CVR) may not be deleted properly if some unforeseen scenarios happened such as network loss during the deletion of PVC. To resolve this issue, perform the following command. - -``` -kubectl edit cvr -n openebs -``` - -And then remove finalizers from the corresponding CVR. Need to remove following entries and save it. - -```shell hideCopy -finalizers: -- cstorvolumereplica.openebs.io/finalizer -``` - -This will automatically remove the pending CVR and delete the cStor volume completely. - -## See Also: - -[FAQs](/docs/additional-info/faqs) [Seek support or help](/docs/introduction/community) [Latest release notes](/docs/introduction/releases) diff --git a/docs/main/troubleshooting/volume-provisioning.md b/docs/main/troubleshooting/volume-provisioning.md deleted file mode 100644 index cc49b816a..000000000 --- a/docs/main/troubleshooting/volume-provisioning.md +++ /dev/null @@ -1,543 +0,0 @@ ---- -id: volume-provisioning -title: Troubleshooting OpenEBS - Provisioning -keywords: - - OpenEBS - - Volume Provisioning - - Volume Provisioning troubleshooting -description: This page contains a list of volume provisioning related troubleshooting information. ---- - -## General guidelines for troubleshooting - -- Contact [OpenEBS Community](/docs/introduction/community) for support. -- Search for similar issues added in this troubleshooting section. 
-- Search for any reported issues on [StackOverflow under OpenEBS tag](https://stackoverflow.com/questions/tagged/openebs) - -[Application complaining ReadOnly filesystem](#application-read-only) - -[Unable to create persistentVolumeClaim due to certificate verification error](#admission-server-ca) - -[Application pods are not running when OpenEBS volumes are provisioned on Rancher](#application-pod-not-running-Rancher) - -[Application pod is stuck in ContainerCreating state after deployment](#application-pod-stuck-after-deployment) - -[Creating cStor pool fails on CentOS when there are partitions on the disk](#cstor-pool-failed-centos-partition-disk) - -[Application pod enters CrashLoopBackOff state](#application-crashloopbackoff) - -[cStor pool pods are not running](#cstor-pool-pod-not-running) - -[OpenEBS Jiva PVC is not provisioning in 0.8.0](#Jiva-provisioning-failed-080) - -[Recovery procedure for Read-only volume where kubelet is running in a container](#recovery-readonly-when-kubelet-is-container) - -[Recovery procedure for Read-only volume for XFS formatted volumes](#recovery-readonly-xfs-volume) - -[Unable to clone OpenEBS volume from snapshot](#unable-to-clone-from-snapshot) - -[Unable to mount XFS formatted volumes into Pod](#unable-to-mount-xfs-volume) - -[Unable to create or delete a PVC](#unable-to-create-or-delete-a-pvc) - -[Unable to provision cStor on DigitalOcean](#unable-to-provision-openebs-volume-on-DigitalOcean) - -[Persistent volumes indefinitely remain in pending state](#persistent-volumes-indefinitely-remain-in-pending-state) - -### Application complaining ReadOnly filesystem {#application-read-only} - -Application sometimes complain about the underlying filesystem has become ReadOnly. - -**Troubleshooting** - -This can happen for many reasons. - -- The cStor target pod is evicted because of resource constraints and is not scheduled within time -- Node is rebooted in adhoc manner (or unscheduled reboot) and Kubernetes is waiting for Kubelet to respond to know if the node is rebooted and the pods on that node need to be rescheduled. Kubernetes can take up to 30 minutes as timeout before deciding the node is going to stay offline and pods need to be rescheduled. During this time, the iSCSI initiator at the application pod has timeout and marked the underlying filesystem as ReadOnly -- cStor target has lost quorum because of underlying node losses and target has marked the lun as ReadOnly - -Go through the Kubelet logs and application pod logs to know the reason for marking the ReadOnly and take appropriate action. [Maintaining volume quorum](/additional-info/k8supgrades) is necessary during Kubernetes node reboots. - -### Unable to create persistentVolumeClaim due to certificate verification error {#admission-server-ca} - -An issue can appear when creating a PersistentVolumeClaim: - -```shell hideCopy -Error from server (InternalError):Internal error occurred: failed calling webhook "admission-webhook.openebs.io": Post https://admission-server-svc.openebs.svc:443/validate?timeout=30s: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "admission-server-ca") -``` - -**Troubleshooting** - -By default OpenEBS chart generates TLS certificates used by the `openebs-admission-controller`, while this is handy, it requires the admission controller to restart on each `helm upgrade` command. 
For most of the use cases, the admission controller would have restarted to update the certificate configurations, if not , then user will get the above mentioned error. - -**Workaround** - -This can be fixed by restarting the admission controller: - -``` -kubectl -n openebs get pods -o name | grep admission-server | xargs kubectl -n openebs delete -``` - -### Application pods are not running when OpenEBS volumes are provisioned on Rancher{#application-pod-not-running-Rancher} - -The setup environment where the issue occurs is rancher/rke with bare metal hosts running CentOS. After installing OpenEBS, OpenEBS pods are running, but application pod is in _ContainerCreating_ state. It consume Jiva volume. The output of `kubectl get pods` is displayed as follows. - -```shell hideCopy -NAME READY STATUS RESTARTS AGE -nginx-deployment-57849d9f57-12345 0/1 ContainerCreating 0 2m -pvc-adb79406-8e3e-11e8-a06a-001c42c2325f-ctrl-58dcdf997f-n4kd9 2/2 Running 0 8m -pvc-adb79406-8e3e-11e8-a06a-001c42c2325f-rep-696b599894-gq4z6 1/1 Running 0 8m -pvc-adb79406-8e3e-11e8-a06a-001c42c2325f-rep-696b599894-hwx52 1/1 Running 0 8m -pvc-adb79406-8e3e-11e8-a06a-001c42c2325f-rep-696b599894-vs97n 1/1 Running 0 8m -``` - -**Troubleshooting** - -Make sure the following prerequisites are done. - -1. Verify iSCSI initiator is installed on nodes and services are running. - -2. Added extra_binds under kubelet service in cluster YAML - -More details are mentioned [here](/user-guides/prerequisites#rancher). - -### Application pod is stuck in ContainerCreating state after deployment{#application-pod-stuck-after-deployment} - -**Troubleshooting** - -- Obtain the output of the `kubectl describe pod ` and check the events. - -- If the error message _executable not found in $PATH_ is found, check whether the iSCSI initiator utils are installed on the node/kubelet container (rancherOS, coreOS). If not, install the same and retry deployment. - -- If the warning message `FailedMount: Unable to mount volumes for pod <>: timeout expired waiting for volumes to attach/mount` is persisting use the following procedure. - - 1. Check whether the Persistent Volume Claim/Persistent Volume (PVC/PV) are created successfully and the OpenEBS controller and replica pods are running. These can be verified using the `kubectl get pvc,pv` and `kubectl get pods`command. - - 2. If the OpenEBS volume pods are not created, and the PVC is in pending state, check whether the storageclass referenced by the application PVC is available/installed. This can be confirmed using the `kubectl get sc` command. If this storageclass is not created, or improperly created without the appropriate attributes, recreate the same and re-deploy the application. - - **Note:** Ensure that the older PVC objects are deleted before re-deployment. - - 3. If the PV is created (in bound state), but replicas are not running or are in pending state, perform a `kubectl describe ` and check the events. If the events indicate _FailedScheduling due to Insufficient cpu, NodeUnschedulable or MatchInterPodAffinity and PodToleratesNodeTaints_, check the following: - - - replica count is equal to or lesser than available schedulable nodes - - there are enough resources on the nodes to run the replica pods - - whether nodes are tainted and if so, whether they are tolerated by the OpenEBS replica pods - - Ensure that the above conditions are met and the replica rollout is successful. This will ensure application enters running state. - - 4. 
If the PV is created and OpenEBS pods are running, use the `iscsiadm -m session` command on the node (where the pod is scheduled) to identify whether the OpenEBS iSCSI volume has been attached/logged-into. If not, verify network connectivity between the nodes. - - 5. If the session is present, identify the SCSI device associated with the session using the command `iscsiadm -m session -P 3`. Once it is confirmed that the iSCSI device is available (check the output of `fdisk -l` for the mapped SCSI device), check the kubelet and system logs including the iscsid and kernel (syslog) for information on the state of this iSCSI device. If inconsistencies are observed, execute the filesyscheck on the device `fsck -y /dev/sd<>`. This will mount the volume to the node. - -- In OpenShift deployments, you may face this issue with the OpenEBS replica pods continuously restarting, that is, they are in crashLoopBackOff state. This is due to the default "restricted" security context settings. Edit the following settings using `oc edit scc restricted` to get the application pod running. - - - _allowHostDirVolumePlugin: true_ - - _runAsUser: runAsAny_ - -### Creating cStor pool fails on CentOS when there are partitions on the disk. {#cstor-pool-failed-centos-partition-disk} - -Creating cStor pool fails with the following error message: - -```shell hideCopy -E0920 14:51:17.474702 8 pool.go:78] Unable to create pool: /dev/disk/by-id/ata-WDC_WD2500BOOM-00JJ -``` - -sdb and sdc are used for cStor pool creation. - -``` -core@k8worker02 ~ $ lsblk -NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT -sda 8:0 0 111.8G 0 disk -|-sda1 8:1 0 128M 0 part /boot -|-sda2 8:2 0 2M 0 part -|-sda3 8:3 0 1G 0 part -| `-usr 254:0 0 1016M 1 crypt /usr -|-sda4 8:4 0 1G 0 part -|-sda6 8:6 0 128M 0 part /usr/share/oem -|-sda7 8:7 0 64M 0 part -`-sda9 8:9 0 109.5G 0 part / -sdb 8:16 0 111.8G 0 disk -sdc 8:32 0 232.9G 0 disk -|-sdc1 8:33 0 1G 0 part -`-sdc2 8:34 0 231.9G 0 part - |-cl-swap 254:1 0 7.8G 0 lvm - |-cl-home 254:2 0 174.1G 0 lvm - `-cl-root 254:3 0 50G 0 lvm -``` - -**Troubleshooting** - -1. Clear the partitions on the portioned disk. - -2. Run the following command on the host machine to check any LVM handler on the device. - - ``` - sudo dmsetup info -C - ``` - - Output of the above command will be similar to the following. - - ```shell hideCopy - Name Maj Min Stat Open Targ Event UUID - usr 254 0 L--r 1 1 0 CRYPT-VERITY-959135d6b3894b3b8125503de238d5c4-usr - centos-home 254 2 L--w 0 1 0 LVM-1kqWMeQWqH3qTsiHhYw3ygAzOvpfDL58dDmziWBI0panwOGRq2rp9PjpmE6qdf1V - centos-swap 254 1 L--w 0 1 0 LVM-1kqWMeQWqH3qTsiHhYw3ygAzOvpfDL58UIVFhLkzvE1mk7uCy2nePlktBHfTuTYF - centos-root 254 3 L--w 0 1 0 LVM-1kqWMeQWqH3qTsiHhYw3ygAzOvpfDL58WULaIYm0X7QmrwQaWYxz1hTwzWocAwYJ - ``` - - If the output is similar to the above, you must remove the handler on the device. - - ``` - sudo dmsetup remove centos-home - sudo dmsetup remove centos-swap - sudo dmsetup remove centos-root - ``` - -### Application pod enters CrashLoopBackOff states {#application-crashloopbackoff} - -Application pod enters CrashLoopBackOff state - -This issue is due to failed application operations in the container. Typically this is caused due to failed writes on the mounted PV. To confirm this, check the status of the PV mount inside the application pod. - -**Troubleshooting** - -- Perform a `kubectl exec -it ` bash (or any available shell) on the application pod and attempt writes on the volume mount. 
The volume mount can be obtained either from the application specification ("volumeMounts" in container spec) or by performing a `df -h` command in the controller shell (the OpenEBS iSCSI device will be mapped to the volume mount). -- The writes can be attempted using a simple command like `echo abc > t.out` on the mount. If the writes fail with _Read-only file system errors_, it means the iSCSI connections to the OpenEBS volumes are lost. You can confirm by checking the node's system logs including iscsid, kernel (syslog) and the kubectl logs (`journalctl -xe, kubelet.log`). -- iSCSI connections usually fail due to the following. - - flaky networks (can be confirmed by ping RTTs, packet loss etc.) or failed networks between - - - OpenEBS PV controller and replica pods - - Application and controller pods - - Node failures - - OpenEBS volume replica crashes or restarts due to software bugs -- In all the above cases, loss of the device for a period greater than the node iSCSI initiator timeout causes the volumes to be re-mounted as RO. -- In certain cases, the node/replica loss can lead to the replica quorum not being met (i.e., less than 51% of replicas available) for an extended period of time, causing the OpenEBS volume to be presented as a RO device. - -**Workaround/Recovery** - -The procedure to ensure application recovery in the above cases is as follows: - -1. Resolve the system issues which caused the iSCSI disruption/RO device condition. Depending on the cause, the resolution steps may include recovering the failed nodes, ensuring replicas are brought back on the same nodes as earlier, fixing the network problems and so on. - -2. Ensure that the OpenEBS volume controller and replica pods are running successfully with all replicas in _RW mode_. Use the command `curl GET http://:9501/v1/replicas | grep createTypes` to confirm. - -3. If anyone of the replicas are still in RO mode, wait for the synchronization to complete. If all the replicas are in RO mode (this may occur when all replicas re-register into the controller within short intervals), you must restart the OpenEBS volume controller using the `kubectl delete pod ` command . Since it is a Kubernetes deployment, the controller pod is restarted successfully. Once done, verify that all replicas transition into _RW mode_. - -4. Un-mount the stale iscsi device mounts on the application node. Typically, these devices are mounted in the `/var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/-lun-0` path. - - Example: - - ``` - umount /var/lib/kubelet/plugins/kubernetes.io/iscsi/iface-default/10.39.241.26: - 3260-iqn.2016-09.com.openebs.jiva:mongo-jiva-mongo-persistent-storage-mongo-0-3481266901-lun-0 - - umount /var/lib/kubelet/pods/ae74da97-c852-11e8-a219-42010af000b6/volumes/kubernetes.io~iscsi/mongo-jiva-mongo-persistent-storage-mongo-0-3481266901 - ``` - -5. Identify whether the iSCSI session is re-established after failure. This can be verified using `iscsiadm -m session`, with the device mapping established using `iscsiadm -m session -P 3` and `fdisk -l`. **Note:** Sometimes, it is observed that there are stale device nodes (scsi device names) present on the Kubernetes node. Unless the logs confirm that a re-login has occurred after the system issues were resolved, it is recommended to perform the following step after doing a purge/logout of the existing session using `iscsiadm -m node -T -u`. - -6. 
If the device is not logged in again, ensure that the network issues/failed nodes/failed replicas are resolved, the device is discovered, and the session is re-established. This can be achieved using the commands `iscsiadm -m discovery -t st -p :3260` and `iscsiadm -m node -T -l` respectively. - -7. Identify the new SCSI device name corresponding to the iSCSI session (the device name may or may not be the same as before). - -8. Re-mount the new disk into the mountpoint mentioned earlier using the `mount -o rw,relatime,data=ordered /dev/sd<> ` command. If the re-mount fails due to inconsistencies on the device (unclean filesystem), perform a filesyscheck `fsck -y /dev/sd<>`. - -9. Ensure that the application uses the newly mounted disk by forcing it to restart on the same node. Use the command`docker stop ` of the application container on the node. Kubernetes will automatically restart the pod to ensure the "desirable" state. - - While this step may not be necessary most times (as the application is already undergoing periodic restarts as part of the CrashLoop cycle), it can be performed if the application pod's next restart is scheduled with an exponential back-off delay. - -**Notes:** - -1. The above procedure works for applications that are either pods or deployments/statefulsets. In case of the latter, the application pod can be restarted (i.e., deleted) after step-4 (iscsi logout) as the deployment/statefulset controller will take care of rescheduling the application on a same/different node with the volume. - -### cStor pool pods are not running {#cstor-pool-pod-not-running} - -The cStor disk pods are not coming up after it deploy with the YAML. On checking the pool pod logs, it says `/dev/xvdg is in use and contains a xfs filesystem.` - -**Workaround:** - -cStor can consume disks that are attached (are visible to OS as SCSI devices) to the Nodes and no need of format these disks. This means disks should not have any filesystem and it should be unmounted on the Node. It is also recommended to wipe out the disks if you are using an used disk for cStor pool creation. The following steps will clear the file system from the disk. - -``` -sudo umount -wipefs -a -``` - -The following is an example output of `lsblk` on node. - -```shell hideCopy -NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT -loop0 7:0 0 89M 1 loop /snap/core/7713 -loop1 7:1 0 18M 1 loop /snap/amazon-ssm-agent/1480 -xvda 202:0 0 128G 0 disk -└─xvda1 202:1 0 128G 0 part / -xvdf 202:80 0 50G 0 disk /home/openebs-ebs -``` - -From the above output, it shows that `/dev/xvdf` is mounted on `/home/openebs-ebs`. The following commands will unmount disk first and then remove the file system. - -``` -sudo umount /dev/xvdf -wipefs -a /dev/xvdf -``` - -After performing the above commands, verify the disk status using `lsblk` command: - -Example output: - -```shell hideCopy -ubuntu@ip-10-5-113-122:~$ lsblk -NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT -loop0 7:0 0 89M 1 loop /snap/core/7713 -loop1 7:1 0 18M 1 loop /snap/amazon-ssm-agent/1480 -xvda 202:0 0 128G 0 disk -└─xvda1 202:1 0 128G 0 part / -xvdf 202:80 0 50G 0 disk -``` - -### OpenEBS Jiva PVC is not provisioning in 0.8.0 {#Jiva-provisioning-failed-080} - -Even all OpenEBS pods are in running state, unable to provision Jiva volume if you install through helm. - -**Troubleshooting:** - -Check the latest logs showing in the OpenEBS provisioner logs. If the particular PVC creation entry logs are not coming on the OpenEBS provisioner pod, then restart the OpenEBS provisioner pod. 
From 0.8.1 version, liveness probe feature will check the OpenEBS provisioner pod status periodically and ensure its availability for OpenEBS PVC creation. - -### Recovery procedure for Read-only volume where kubelet is running in a container. {#recovery-readonly-when-kubelet-is-container} - -In environments where the kubelet runs in a container, perform the following steps as part of the recovery procedure for a Volume-Read only issue. - -1. Confirm that the OpenEBS target does not exist as a Read Only device by the OpenEBS controller and that all replicas are in Read/Write mode. - - Un-mount the iSCSI volume from the node in which the application pod is scheduled. - - Perform the following iSCSI operations from inside the kubelet container. - - Logout - - Rediscover - - Login - - Perform the following iSCSI operations from inside the kubelet container. - - Re-mount the iSCSI device (may appear with a new SCSI device name) on the node. - - Verify if the application pod is able to start using/writing into the newly mounted device. -2. Once the application is back in "Running" state post recovery by following steps 1-9, if existing/older data is not visible (i.e., it comes up as a fresh instance), it is possible that the application pod is using the docker container filesystem instead of the actual PV (observed sometimes due to the reconciliation attempts by Kubernetes to get the pod to a desired state in the absence of the mounted iSCSI disk). This can be checked by performing a `df -h` or `mount` command inside the application pods. These commands should show the scsi device `/dev/sd*` mounted on the specified mount point. If not, the application pod can be forced to use the PV by restarting it (deployment/statefulset) or performing a docker stop of the application container on the node (pod). - -### Recovery procedure for Read-only volume for XFS formatted volumes {#recovery-readonly-xfs-volume} - -In case of `XFS` formatted volumes, perform the following steps once the iSCSI target is available in RW state & logged in: - -- Un-mount the iSCSI volume from the node in which the application pod is scheduled. This may cause the application to enter running state by using the local mount point. -- Mount to volume to a new (temp) directory to replay the metadata changes in the log -- Unmount the volume again -- Perform `xfs_repair /dev/`. This fixes if any file system related errors on the device -- Perform application pod deletion to facilitate fresh mount of the volume. At this point, the app pod may be stuck on `terminating` OR `containerCreating` state. This can be resolved by deleting the volume folder (w/ app content) on the local directory. - -### Unable to clone OpenEBS volume from snapshot {#unable-to-clone-from-snapshot} - -Taken a snapshot of a PVC successfully. But unable to clone the volume from the snapshot. - -**Troubleshooting:** - -Logs from snapshot-controller pods are follows. 
- -```shell hideCopy -ERROR: logging before flag.Parse: I0108 18:11:54.017909 1 volume.go:73] OpenEBS volume provisioner namespace openebs -I0108 18:11:54.181897 1 snapshot-controller.go:95] starting snapshot controller -I0108 18:11:54.200069 1 snapshot-controller.go:167] Starting snapshot controller -I0108 18:11:54.200139 1 controller_utils.go:1027] Waiting for caches to sync for snapshot-controller controller -I0108 18:11:54.300430 1 controller_utils.go:1034] Caches are synced for snapshot-controller controller -I0108 23:12:26.170921 1 snapshot-controller.go:190] [CONTROLLER] OnAdd /apis/volumesnapshot.external-storage.k8s.io/v1/namespaces/default/volumesnapshots/xl-release-snapshot, Snapshot &v1.VolumeSnapshot{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Metadata:v1.ObjectMeta{Name:"xl-release-snapshot", GenerateName:"", Namespace:"default", SelfLink:"/apis/volumesnapshot.external-storage.k8s.io/v1/namespaces/default/volumesnapshots/xl-release-snapshot", UID:"dc804d0d-139a-11e9-9561-005056949728", ResourceVersion:"2072353", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63682585945, loc:(*time.Location)(0x2a17900)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"volumesnapshot.external-storage.k8s.io/v1\",\"kind\":\"VolumeSnapshot\",\"metadata\":{\"annotations\":{},\"name\":\"xl-release-snapshot\",\"namespace\":\"default\"},\"spec\":{\"persistentVolumeClaimName\":\"xlr-data-pvc\"}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:""}, Status:v1.VolumeSnapshotStatus{CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Conditions:[]v1.VolumeSnapshotCondition(nil)}} -I0108 23:12:26.210135 1 desired_state_of_world.go:76] Adding new snapshot to desired state of world: default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728 -E0108 23:12:26.288184 1 snapshotter.go:309] No conditions for this snapshot yet. -I0108 23:12:26.295175 1 snapshotter.go:160] No VolumeSnapshotData objects found on the API server -I0108 23:12:26.295224 1 snapshotter.go:458] findSnapshot: snapshot xl-release-snapshot -I0108 23:12:26.355476 1 snapshotter.go:469] findSnapshot: find snapshot xl-release-snapshot by tags &map[]. -I0108 23:12:26.355550 1 processor.go:183] FindSnapshot by tags: map[string]string(nil) -I0108 23:12:26.355575 1 snapshotter.go:449] syncSnapshot: Creating snapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728 ... -I0108 23:12:26.355603 1 snapshotter.go:491] createSnapshot: Creating snapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728 through the plugin ... -I0108 23:12:26.373908 1 snapshotter.go:497] createSnapshot: Creating metadata for snapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728. -I0108 23:12:26.373997 1 snapshotter.go:701] In updateVolumeSnapshotMetadata -I0108 23:12:26.380908 1 snapshotter.go:721] updateVolumeSnapshotMetadata: Metadata UID: dc804d0d-139a-11e9-9561-005056949728 Metadata Name: xl-release-snapshot Metadata Namespace: default Setting tags in Metadata Labels: map[string]string{"SnapshotMetadata-Timestamp":"1546989146380869451", "SnapshotMetadata-PVName":"pvc-5f9bd5ec-1398-11e9-9561-005056949728"}. 
-I0108 23:12:26.391791 1 snapshot-controller.go:197] [CONTROLLER] OnUpdate oldObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:""} -I0108 23:12:26.391860 1 snapshot-controller.go:198] [CONTROLLER] OnUpdate newObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:""} -I0108 23:12:26.392281 1 snapshotter.go:742] updateVolumeSnapshotMetadata: returning cloudTags [map[string]string{"kubernetes.io/created-for/snapshot/namespace":"default", "kubernetes.io/created-for/snapshot/name":"xl-release-snapshot", "kubernetes.io/created-for/snapshot/uid":"dc804d0d-139a-11e9-9561-005056949728", "kubernetes.io/created-for/snapshot/timestamp":"1546989146380869451"}] -I0108 23:12:26.392661 1 snapshot.go:53] snapshot Spec Created: -{"metadata":{"name":"pvc-5f9bd5ec-1398-11e9-9561-005056949728_xl-release-snapshot_1546989146392411824","namespace":"default","creationTimestamp":null},"spec":{"casType":"jiva","volumeName":"pvc-5f9bd5ec-1398-11e9-9561-005056949728"}} -I0108 23:12:26.596285 1 snapshot.go:84] Snapshot Successfully Created: -{"apiVersion":"v1alpha1","kind":"CASSnapshot","metadata":{"name":"pvc-5f9bd5ec-1398-11e9-9561-005056949728_xl-release-snapshot_1546989146392411824"},"spec":{"casType":"jiva","volumeName":"pvc-5f9bd5ec-1398-11e9-9561-005056949728"}} -I0108 23:12:26.596362 1 snapshotter.go:276] snapshot created: &{ 0xc420038a00}. Conditions: &[]v1.VolumeSnapshotCondition{v1.VolumeSnapshotCondition{Type:"Ready", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0xbf056976a38b90b7, ext:18032657942280, loc:(*time.Location)(0x2a17900)}}, Reason:"", Message:"Snapshot created successfully"}} -I0108 23:12:26.596439 1 snapshotter.go:508] createSnapshot: create VolumeSnapshotData object for VolumeSnapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728. -I0108 23:12:26.596478 1 snapshotter.go:533] createVolumeSnapshotData: Snapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728. Conditions: &[]v1.VolumeSnapshotCondition{v1.VolumeSnapshotCondition{Type:"Ready", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0xbf056976a38b90b7, ext:18032657942280, loc:(*time.Location)(0x2a17900)}}, Reason:"", Message:"Snapshot created successfully"}} -I0108 23:12:26.604409 1 snapshotter.go:514] createSnapshot: Update VolumeSnapshot status and bind VolumeSnapshotData to VolumeSnapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728. 
-I0108 23:12:26.604456 1 snapshotter.go:860] In bindVolumeSnapshotDataToVolumeSnapshot -I0108 23:12:26.604472 1 snapshotter.go:862] bindVolumeSnapshotDataToVolumeSnapshot: Namespace default Name xl-release-snapshot -I0108 23:12:26.608792 1 snapshotter.go:877] bindVolumeSnapshotDataToVolumeSnapshot: Updating VolumeSnapshot object [&v1.VolumeSnapshot{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, Metadata:v1.ObjectMeta{Name:"xl-release-snapshot", GenerateName:"", Namespace:"default", SelfLink:"/apis/volumesnapshot.external-storage.k8s.io/v1/namespaces/default/volumesnapshots/xl-release-snapshot", UID:"dc804d0d-139a-11e9-9561-005056949728", ResourceVersion:"2072354", Generation:2, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63682585945, loc:(*time.Location)(0x2a17900)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"SnapshotMetadata-Timestamp":"1546989146380869451", "SnapshotMetadata-PVName":"pvc-5f9bd5ec-1398-11e9-9561-005056949728"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"volumesnapshot.external-storage.k8s.io/v1\",\"kind\":\"VolumeSnapshot\",\"metadata\":{\"annotations\":{},\"name\":\"xl-release-snapshot\",\"namespace\":\"default\"},\"spec\":{\"persistentVolumeClaimName\":\"xlr-data-pvc\"}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"}, Status:v1.VolumeSnapshotStatus{CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Conditions:[]v1.VolumeSnapshotCondition{v1.VolumeSnapshotCondition{Type:"Ready", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0xbf056976a38b90b7, ext:18032657942280, loc:(*time.Location)(0x2a17900)}}, Reason:"", Message:"Snapshot created successfully"}}}}] -I0108 23:12:26.617060 1 snapshot-controller.go:197] [CONTROLLER] OnUpdate oldObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:""} -I0108 23:12:26.617102 1 snapshot-controller.go:198] [CONTROLLER] OnUpdate newObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0108 23:12:26.617118 1 desired_state_of_world.go:76] Adding new snapshot to desired state of world: default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728 -I0108 23:12:26.617449 1 snapshotter.go:202] In waitForSnapshot: snapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728 snapshot data k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7 -I0108 23:12:26.620951 1 snapshotter.go:241] waitForSnapshot: Snapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728 created successfully. Adding it to Actual State of World. -I0108 23:12:26.620991 1 actual_state_of_world.go:74] Adding new snapshot to actual state of world: default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728 -I0108 23:12:26.621005 1 snapshotter.go:526] createSnapshot: Snapshot default/xl-release-snapshot-dc804d0d-139a-11e9-9561-005056949728 created successfully. 
-I0109 00:11:54.211526 1 snapshot-controller.go:197] [CONTROLLER] OnUpdate oldObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0109 00:11:54.211695 1 snapshot-controller.go:198] [CONTROLLER] OnUpdate newObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0109 01:11:54.211693 1 snapshot-controller.go:197] [CONTROLLER] OnUpdate oldObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0109 01:11:54.211817 1 snapshot-controller.go:198] [CONTROLLER] OnUpdate newObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0109 02:11:54.211890 1 snapshot-controller.go:197] [CONTROLLER] OnUpdate oldObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0109 02:11:54.212010 1 snapshot-controller.go:198] [CONTROLLER] OnUpdate newObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0109 03:11:54.212062 1 snapshot-controller.go:197] [CONTROLLER] OnUpdate oldObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0109 03:11:54.212201 1 snapshot-controller.go:198] [CONTROLLER] OnUpdate newObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", SnapshotDataName:"k8s-volume-snapshot-dd0c3a0d-139a-11e9-a875-467fb97678b7"} -I0109 04:11:54.212249 1 snapshot-controller.go:197] [CONTROLLER] OnUpdate oldObj: v1.VolumeSnapshotSpec{PersistentVolumeClaimName:"xlr-data-pvc", -``` - -**Resolution:** - -This can be happen due to the stale entries of snapshot and snapshot data. By deleting those entries will resolve this issue. - -### Unable to mount XFS formatted volumes into Pod {#unable-to-mount-xfs-volume} - -I created PVC with FSType as `xfs`. OpenEBS PV is successfully created and I have verified that iSCSI initiator is available on the Application node. But application pod is unable to mount the volume. 
- -**Troubleshooting:** - -Describing application pod is showing following error: - -```shell hideCopy -Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Warning FailedScheduling 58s (x2 over 59s) default-scheduler pod has unbound PersistentVolumeClaims (repeated 4 times) - Normal Scheduled 58s default-scheduler Successfully assigned redis-master-0 to node0 - Normal SuccessfulAttachVolume 58s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-a036d681-8fd4-11e8-ad96-de1a202c9007" - Normal SuccessfulMountVolume 55s kubelet, node0 MountVolume.SetUp succeeded for volume "default-token-12345" - Warning FailedMount 24s (x4 over 43s) kubelet, node0 MountVolume.WaitForAttach failed for volume "pvc-a036d681-8fd4-11e8-ad96-de1a202c9007" : failed to get any path for iscsi disk, last err seen: -iscsi: failed to sendtargets to portal 10.233.27.8:3260 output: iscsiadm: cannot make connection to 10.233.27.8: Connection refused -iscsiadm: cannot make connection to 10.233.27.8: Connection refused -iscsiadm: cannot make connection to 10.233.27.8: Connection refused -iscsiadm: cannot make connection to 10.233.27.8: Connection refused -iscsiadm: cannot make connection to 10.233.27.8: Connection refused -iscsiadm: cannot make connection to 10.233.27.8: Connection refused -iscsiadm: connection login retries (reopen_max) 5 exceeded -iscsiadm: No portals found -, err exit status 21 - Warning FailedMount 8s (x2 over 17s) kubelet, node0 MountVolume.MountDevice failed for volume "pvc-a036d681-8fd4-11e8-ad96-de1a202c9007" : executable file not found in $PATH -``` - -kubelet had following errors during mount process: - -```shell hideCopy -kubelet[687]: I0315 15:14:54.179765 687 mount_linux.go:453] `fsck` error fsck from util-linux 2.27.1 -kubelet[687]: fsck.ext2: Bad magic number in super-block while trying to open /dev/sdn -kubelet[687]: /dev/sdn: -kubelet[687]: The superblock could not be read or does not describe a valid ext2/ext3/ext4 -kubelet[687]: filesystem. If the device is valid and it really contains an ext2/ext3/ext4 -``` - -And dmesg was showing errors like: - -```shell hideCopy -[5985377.220132] XFS (sdn): Invalid superblock magic number -[5985377.306931] XFS (sdn): Invalid superblock magic number -``` - -**Resolution:** - -This can happen due to `xfs_repair` failure on the application node. Make sure that the application node has `xfsprogs` package installed. - -``` -apt install xfsprogs -``` - -### Unable to create or delete a PVC {#unable-to-create-or-delete-a-pvc} - -User is unable to create a new PVC or delete an existing PVC. While doing any of these operation, the following error is coming on the PVC. - -```shell hideCopy -Error from server (InternalError): Internal error occurred: failed calling webhook "admission-webhook.openebs.io": Post https://admission-server-svc.openebs.svc:443/validate?timeout=30s: Bad Gateway -``` - -**Workaround:** - -When a user creates or deletes a PVC, there are validation triggers and a request has been intercepted by the admission webhook controller after authentication/authorization from kube-apiserver. -By default admission webhook service has been configured to 443 port and the error above suggests that either port 443 is not allowed to use in cluster or admission webhook service has to be allowed in k8s cluster Proxy settings. - -User is unable to create a new PVC or delete an existing PVC. While doing any of these operation, the following error is coming on the PVC. 
- -```shell hideCopy -Error from server (InternalError): Internal error occurred: failed calling webhook "admission-webhook.openebs.io": Post https://admission-server-svc.openebs.svc:443/validate?timeout=30s: Bad Gateway -``` - -**Workaround:** - -When a user creates or deletes a PVC, there are validation triggers and a request has been intercepted by the admission webhook controller after authentication/authorization from kube-apiserver. -By default admission webhook service has been configured to 443 port and the error above suggests that either port 443 is not allowed to use in cluster or admission webhook service has to be allowed in k8s cluster Proxy settings. - -### Unable to provision OpenEBS volume on DigitalOcean {#unable-to-provision-openebs-volume-on-DigitalOcean} - -User is unable to provision cStor or jiva volume on DigitalOcean, encountering error thrown from iSCSI PVs: - -```shell hideCopy -MountVolume.WaitForAttach failed for volume “pvc-293d3560-a5c3–41d5–8911–67f33115b8ee” : executable file not found in $PATH -``` - -**Resolution :** - -To avoid this issue, the Kubelet Service needs to be updated to mount the required packages to establish iSCSI connection to the target. Kubelet Service on all the nodes in the cluster should be updated. - -:::info -The exact mounts may vary depending on the OS. -The following steps have been verified on: - -1. Digital Ocean Kubernetes Release: 1.15.3-do.2 -2. Nodes running OS Debian Release: 9.11 - -::: - -Add the below lines (volume mounts) to the file on each of the nodes: - -``` -/etc/systemd/system/kubelet.service -``` - -``` --v /sbin/iscsiadm:/usr/bin/iscsiadm \ --v /lib/x86_64-linux-gnu/libisns-nocrypto.so.0:/lib/x86_64-linux-gnu/libisns-nocrypto.so.0 \ -``` - -**Restart the kubelet service using the following commands:** - -``` -systemctl daemon-reload -service kubelet restart -``` - -To know more about provisioning cStor volume on DigitalOcean [click here](/user-guides/prerequisites#digitalocean). - -### Persistent volumes indefinitely remain in pending state {#persistent-volumes-indefinitely-remain-in-pending-state} - -If users have a strict firewall setup on their Kubernetes nodes, the provisioning of a PV from a storageclass backed by a cStor storage pool may fail. The pool can be created without any issue and even the storage class is created, but the PVs may stay in pending state indefinitely. - -The output from the `openebs-provisioner` might look as follows: - -``` -$ kubectl -n openebs logs openebs-provisioner-796dc9d598-k86qn -... -I1117 13:12:43.103813 1 volume.go:73] OpenEBS volume provisioner namespace openebs -I1117 13:12:43.109157 1 leaderelection.go:187] attempting to acquire leader lease openebs/openebs.io-provisioner-iscsi... -I1117 13:12:43.117628 1 leaderelection.go:196] successfully acquired lease openebs/openebs.io-provisioner-iscsi -I1117 13:12:43.117999 1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"openebs", Name:"openebs.io-provisioner-iscsi", UID:"09e04e2b-302a-454d-a160-fa384cbc69fe", APIVersion:"v1", ResourceVersion:"1270", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' openebs-provisioner-796dc9d598-k86qn_f0833d66-093b-11ea-a950-0a580a2a0009 became leader -I1117 13:12:43.122149 1 controller.go:636] Starting provisioner controller openebs.io/provisioner-iscsi_openebs-provisioner-796dc9d598-k86qn_f0833d66-093b-11ea-a950-0a580a2a0009! 
-I1117 13:12:43.222583 1 controller.go:685] Started provisioner controller openebs.io/provisioner-iscsi_openebs-provisioner-796dc9d598-k86qn_f0833d66-093b-11ea-a950-0a580a2a0009! -I1117 13:17:11.170266 1 controller.go:991] provision "default/mongodb" class "openebs-storageclass-250gb": started -I1117 13:17:11.177260 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mongodb", UID:"a764b1c0-105f-4f7c-a32d-88275622cb15", APIVersion:"v1", ResourceVersion:"2375", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/mongodb" -E1117 13:17:41.177346 1 volume.go:164] Error when connecting to maya-apiserver Get http://10.43.83.204:5656/latest/volumes/pvc-a764b1c0-105f-4f7c-a32d-88275622cb15: dial tcp 10.43.83.204:5656: i/o timeout -E1117 13:17:41.177446 1 cas_provision.go:111] Unexpected error occurred while trying to read the volume: Get http://10.43.83.204:5656/latest/volumes/pvc-a764b1c0-105f-4f7c-a32d-88275622cb15: dial tcp 10.43.83.204:5656: i/o timeout -W1117 13:17:41.177555 1 controller.go:750] Retrying syncing claim "default/mongodb" because failures 0 < threshold 15 -E1117 13:17:41.177620 1 controller.go:765] error syncing claim "default/mongodb": failed to provision volume with StorageClass "openebs-storageclass-250gb": Get http://10.43.83.204:5656/latest/volumes/pvc-a764b1c0-105f-4f7c-a32d-88275622cb15: dial tcp 10.43.83.204:5656: i/o timeout -... -``` - -**Workaround:** - -This issue has currently only been observed, if the underlying node uses a network bridge and if the setting `net.bridge.bridge-nf-call-iptables=1` in the `/etc/sysctl.conf` is present. The aforementioned setting is required in some Kubernetes installations, such as the Rancher Kubernetes Engine (RKE). - -To avoid this issue, open the port `5656/tcp` on the nodes that run the OpenEBS API pod. Alternatively, removing the network bridge _might_ work. - -## See Also: - -[FAQs](/docs/additional-info/faqs) [Seek support or help](/docs/introduction/community) [Latest release notes](/docs/introduction/releases) diff --git a/docs/main/user-guides/local-storage-user-guide/additional-information/alphafeatures.md b/docs/main/user-guides/local-storage-user-guide/additional-information/alphafeatures.md index 4fd4e858f..d36338767 100644 --- a/docs/main/user-guides/local-storage-user-guide/additional-information/alphafeatures.md +++ b/docs/main/user-guides/local-storage-user-guide/additional-information/alphafeatures.md @@ -26,13 +26,13 @@ Upgrade is not supported for features in Alpha version. OpenEBS is developing a kubectl plugin for openebs called `openebsctl` that can help perform administrative tasks on OpenEBS volumes and pools. -For additional details and detailed instructions on how to get started with OpenEBS CLI, see [here](https://github.com/openebs/openebsctl). +Refer [openebsctl](https://github.com/openebs/openebsctl) for more information and detailed instructions on how to get started with OpenEBS CLI. ## OpenEBS Monitoring Add-on OpenEBS is developing a monitoring add-on package that can be installed via helm for setting up a default prometheus, grafana, and alert manager stack. The package also will include default service monitors, dashboards, and alert rules. -For additional details and detailed instructions on how to get started with OpenEBS Monitoring Add-on, see [here](https://github.com/openebs/monitoring). 
+Refer [Monitoring](https://github.com/openebs/monitoring) for more information and detailed instructions on how to get started with OpenEBS Monitoring Add-on. ## Data Populator @@ -43,6 +43,4 @@ The Data populator can be used to load seed data into a Kubernetes persistent vo 1. Decommissioning of a node in the cluster: In scenarios where a Kubernetes node needs to be decommissioned whether for upgrade or maintenance, a data populator can be used to migrate the data saved in the Local Storage (a.k.a Local Engine) of the node, that has to be decommissioned. 2. Loading seed data to Kubernetes volumes: Data populator can be used to scale applications without using read-write many operation. The application can be pre-populated with the static content available in an existing PV. -To get more details about Data Populator, see [here](https://github.com/openebs/data-populator#data-populator). - -For instructions on the installation and usage of Data Populator, see [here](https://github.com/openebs/data-populator#install). +Refer [Data Populator](https://github.com/openebs/data-populator#data-populator) for instructions on the installation and usage of Data Populator. diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md index 8e84bf627..f396c4854 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-deployment.md @@ -10,7 +10,7 @@ description: This section explains the instructions to deploy an application for This section explains the instructions to deploy an application for the OpenEBS Local Persistent Volumes (PV) backed by Hostpath. -For deployment instructions, see [here](../../../quickstart-guide/deploy-a-test-application.md). +Refer to the [Deploy an Application documentation](../../../quickstart-guide/deploy-a-test-application.md) for deployment instructions. ## Cleanup diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md index 91d5baefd..59b24b3bb 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-hostpath/hostpath-installation.md @@ -49,7 +49,7 @@ services: ## Installation -For installation instructions, see [here](../../../quickstart-guide/installation.md). +Refer to the [OpenEBS Installation documentation](../../../quickstart-guide/installation.md) to install Local PV Hostpath. ## Support diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md index 5079834fa..ac5dad17c 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-configuration.md @@ -29,7 +29,7 @@ parameters: provisioner: local.csi.openebs.io ``` -See [storageclasses](https://github.com/openebs/lvm-localpv/blob/develop/docs/storageclasses.md) to know all the supported parameters for Local PV LVM.
## StorageClass Parameters Conformance Matrix @@ -182,7 +182,7 @@ If VolumeWeighted scheduler is used, then the driver will pick the node containi ### AllowVolumeExpansion (Optional) -Users can expand the volumes only when the `allowVolumeExpansion` field is set to true in storageclass. If a field is unspecified, then volume expansion is not supported. For more information about expansion workflow click [here](https://github.com/openebs/lvm-localpv/blob/HEAD/design/lvm/resize_workflow.md#lvm-localpv-volume-expansion). +Users can expand the volumes only when the `allowVolumeExpansion` field is set to true in storageclass. If a field is unspecified, then volume expansion is not supported. Refer [Volume Expansion](https://github.com/openebs/lvm-localpv/blob/HEAD/design/lvm/resize_workflow.md#lvm-localpv-volume-expansion) for more information about expansion workflow. ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -197,7 +197,7 @@ parameters: ### MountOptions (Optional) -Volumes that are provisioned via Local PV LVM will use the mount options specified in storageclass during volume mounting time inside an application. If a field is unspecified/specified, `-o default` option will be added to mount the volume. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/mount_options.md) for more information about mount options workflow. +Volumes that are provisioned via Local PV LVM will use the mount options specified in storageclass during volume mounting time inside an application. If a field is unspecified/specified, `-o default` option will be added to mount the volume. Refer [Mount Options](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/mount_options.md) for more information about mount options workflow. :::note Mount options are not validated. If mount options are invalid, then volume mount fails. @@ -221,7 +221,7 @@ Local PV LVM storageclass supports various parameters for different use cases. T - #### FsType (Optional) - Admin can specify filesystem in storageclass. Local PV LVM CSI-Driver will format block device with specified filesystem and mount in the application pod. If fsType is not specified defaults to `ext4` filesystem. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/fs_type.md) for more information about filesystem type workflow. + Admin can specify filesystem in storageclass. Local PV LVM CSI-Driver will format block device with specified filesystem and mount in the application pod. If fsType is not specified defaults to `ext4` filesystem. Refer [FsType](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/fs_type.md) for more information about filesystem type workflow. ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -237,7 +237,7 @@ Local PV LVM storageclass supports various parameters for different use cases. T - #### Shared (Optional) - Local PV LVM volume mount points can be shared among the multiple pods on the same node. Applications that can share the volume can set the value of `shared` parameter to yes. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/shared.md) for more information about workflow of share volume. + Local PV LVM volume mount points can be shared among the multiple pods on the same node. Applications that can share the volume can set the value of `shared` parameter to yes. 
Refer [Shared Volume](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/shared.md) for more information about workflow of shared volume. ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass @@ -252,7 +252,7 @@ Local PV LVM storageclass supports various parameters for different use cases. T - #### vgpattern (Must parameter if volgroup is not provided, otherwise this is optional) - vgpattern specifies the regular expression for the volume groups on node from which the volumes can be created. The *vgpattern* is the must argument if `volgroup` parameter is not provided in the storageclass. Here, in this case, the driver will pick the volume groups matching the vgpattern with enough free capacity to accommodate the volume and will use the one which has the largest capacity available for provisioning the volume. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/vg_pattern.md) for more information about vgpattern workflow. + vgpattern specifies the regular expression for the volume groups on node from which the volumes can be created. The *vgpattern* is the must argument if `volgroup` parameter is not provided in the storageclass. Here, in this case, the driver will pick the volume groups matching the vgpattern with enough free capacity to accommodate the volume and will use the one which has the largest capacity available for provisioning the volume. Refer [VG Pattern](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/vg_pattern.md) for more information about vgpattern workflow. ```yaml apiVersion: storage.k8s.io/v1 @@ -265,9 +265,11 @@ Local PV LVM storageclass supports various parameters for different use cases. T vgpattern: "lvmvg.*" ## vgpattern specifies pattern of lvm volume group name ``` - if `volgroup` and `vgpattern` both the parameters are defined in the storageclass then `volgroup` will get higher priority and the driver will use that to provision to the volume. + If `volgroup` and `vgpattern` both the parameters are defined in the storageclass then `volgroup` will get higher priority and the driver will use that to provision to the volume. - **Note:** Please note that either volgroup or vgpattern should be present in the storageclass parameters to make the provisioning successful. + :::note + Either `volgroup` or `vgpattern` should be present in the storageclass parameters to make the provisioning successful. + ::: - #### Volgroup (Must parameter if vgpattern is not provided, otherwise this is optional) @@ -290,7 +292,7 @@ Local PV LVM storageclass supports various parameters for different use cases. T - #### ThinProvision (Optional) - For creating a thin-provisioned volume, use the thinProvision parameter in the storage class. Its allowed values are: "yes" and "no". If we do not set the thinProvision parameter by default its value will be `no` and it will work as thick provisioned volumes. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/thin_provision.md) for more details about thinProvisioned workflow. + For creating a thin-provisioned volume, use the thinProvision parameter in the storage class. Its allowed values are: "yes" and "no". If we do not set the thinProvision parameter by default its value will be `no` and it will work as thick provisioned volumes. 
Refer [Thin Provisioning](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/thin_provision.md) for more details about thinProvisioned workflow. ```yaml apiVersion: storage.k8s.io/v1 @@ -332,7 +334,7 @@ parameters: volumeBindingMode: WaitForFirstConsumer ## It can also replaced by Immediate volume binding mode depending on the use case. ``` - See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/volume_binding_mode.md) for more details about VolumeBindingMode. + Refer [StorageClass VolumeBindingMode](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/volume_binding_mode.md) for more details about VolumeBindingMode. #### Reclaim Policy (Optional) @@ -351,7 +353,7 @@ parameters: reclaimPolicy: Delete ## Reclaim policy can be specified here. It also accepts Retain ``` -See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/reclaim_policy.md) for more details about the reclaim policy. +Refer [StorageClass Volume Reclaim Policy](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/reclaim_policy.md) for more details about the reclaim policy. ### StorageClass with Custom Node Labels @@ -419,7 +421,7 @@ spec: - test2 ``` -If you want to change topology keys, just a set new env(ALLOWED_TOPOLOGIES). See [FAQs](./faq.md#1-how-to-add-custom-topology-key) for more details. +If you want to change topology keys, just a set new env(ALLOWED_TOPOLOGIES). Refer [FAQs](./faq.md#1-how-to-add-custom-topology-key) for more details. ``` $ kubectl edit ds -n kube-system openebs-lvm-node @@ -458,7 +460,7 @@ allowedTopologies: Here, the volumes will be provisioned on the nodes that have label “openebs.io/lvmvg” set as “nvme”. - See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/allowed_topologies.md) for more details about topology. + Refer [Allowed Topologies](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/storageclass-parameters/allowed_topologies.md) for more details about topology. #### VolumeGroup Availability diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md index c911f0405..e865b869f 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-deployment.md @@ -117,7 +117,7 @@ Check the provisioned volumes on the node, we need to run pvscan --cache command **AccessMode** -LVM-LocalPV supports only ReadWriteOnce access mode i.e. volume can be mounted as read-write by a single node. AccessMode is a required field, if the field is unspecified then it will lead to a creation error. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/access_mode.md) for more information about the access modes workflow. +LVM-LocalPV supports only ReadWriteOnce access mode i.e. volume can be mounted as read-write by a single node. AccessMode is a required field, if the field is unspecified then it will lead to a creation error. Refer [Access Modes](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/access_mode.md) for more information about the access modes workflow. 
``` kind: PersistentVolumeClaim @@ -135,7 +135,7 @@ spec: **StorageClassName** -LVM CSI-Driver supports dynamic provision of volume for the PVCs referred to as LVM storageclass. StorageClassName is a required field, if the field is unspecified then it will lead to provision error. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/storage_class.md) for more information about the dynamic provisioning workflow. +LVM CSI-Driver supports dynamic provision of volume for the PVCs referred to as LVM storageclass. StorageClassName is a required field, if the field is unspecified then it will lead to provision error. Refer [StorageClass Reference](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/storage_class.md) for more information about the dynamic provisioning workflow. ``` kind: PersistentVolumeClaim @@ -153,7 +153,7 @@ spec: **Capacity Resource** -Admin/User can specify the desired capacity for LVM volume. CSI-Driver will provision a volume if the underlying volume group has requested capacity available else provisioning volume will be errored. StorageClassName is a required field, if the field is unspecified then it will lead to provisioning errors. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/capacity_resource.md) for more information about the workflows. +Admin/User can specify the desired capacity for LVM volume. CSI-Driver will provision a volume if the underlying volume group has requested capacity available else provisioning volume will be errored. StorageClassName is a required field, if the field is unspecified then it will lead to provisioning errors. Refer [Resource Request](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/capacity_resource.md) for more information about the workflows. ``` kind: PersistentVolumeClaim @@ -177,7 +177,7 @@ Block (Block mode can be used in a case where the application itself maintains f Filesystem (Application which requires filesystem as a prerequisite) :::note -If unspecified defaults to Filesystem mode. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/volume_mode.md) for more information about workflows. +If unspecified defaults to Filesystem mode. Refer [Volume Mode](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/volume_mode.md) for more information about workflows. ::: ``` @@ -197,7 +197,7 @@ spec: **Selectors (Optional)** -Users can bind any of the retained LVM volumes to the new PersistentVolumeClaim object via the selector field. If the selector and [volumeName](https://github.com/openebs/lvm-localpv/blob/develop/docs/persistentvolumeclaim.md#volumename-optional) fields are unspecified then the LVM CSI driver will provision new volume. If the volume selector is specified then request will not reach to local pv driver. This is a use case of pre-provisioned volume. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/selector.md) for more information about the workflows. +Users can bind any of the retained LVM volumes to the new PersistentVolumeClaim object via the selector field. If the selector and [volumeName](https://github.com/openebs/lvm-localpv/blob/develop/docs/persistentvolumeclaim.md#volumename-optional) fields are unspecified then the LVM CSI driver will provision new volume. 
If the volume selector is specified then request will not reach to local pv driver. This is a use case of pre-provisioned volume. Refer [Volume Selector](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/selector.md) for more information about the workflows. Follow the below steps to specify selector on PersistentVolumeClaim: @@ -251,7 +251,7 @@ pvc-8376b776-75f9-4786-8311-f8780adfabdb 6Gi RWO Retain **VolumeName (Optional)** -VolumeName can be used to bind PersistentVolumeClaim(PVC) to retained PersistentVolume(PV). When VolumeName is specified K8s will ignore [selector field](https://github.com/openebs/lvm-localpv/blob/develop/docs/persistentvolumeclaim.md#selectors-optional). If volumeName field is specified Kubernetes will try to bind to specified volume(It will help to create claims for pre provisioned volume). If volumeName is unspecified then CSI driver will try to provision new volume. See [here](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/volume_name.md) for more information about the workflows. +VolumeName can be used to bind PersistentVolumeClaim(PVC) to retained PersistentVolume(PV). When VolumeName is specified K8s will ignore [selector field](https://github.com/openebs/lvm-localpv/blob/develop/docs/persistentvolumeclaim.md#selectors-optional). If volumeName field is specified Kubernetes will try to bind to specified volume(It will help to create claims for pre provisioned volume). If volumeName is unspecified then CSI driver will try to provision new volume. Refer [Volume Name](https://github.com/openebs/lvm-localpv/blob/develop/design/lvm/persistent-volume-claim/volume_name.md) for more information about the workflows. :::note Before creating PVC make retained/preprovisioned PersistentVolume Available by removing claimRef on PersistentVolume. diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md index b8376623f..ccba608d1 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-lvm/lvm-installation.md @@ -30,12 +30,14 @@ Create the Volume group on all the nodes, which will be used by the LVM Driver f ``` sudo pvcreate /dev/loop0 -sudo vgcreate lvmvg /dev/loop0 ## here lvmvg is the volume group name to be created +sudo vgcreate lvmvg /dev/loop0 ``` +In the above command, `lvmvg` is the volume group name to be created. + ## Installation -For installation instructions, see [here](../../../quickstart-guide/installation.md). +Refer to the [OpenEBS Installation documentation](../../../quickstart-guide/installation.md) to install Local PV LVM. 
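Before or after installing the driver, it is worth confirming on each node that the volume group created above is visible. A quick check, assuming the loopback device and the `lvmvg` group from the previous step (adjust the names if yours differ):

```
sudo pvs /dev/loop0    # physical volume created with pvcreate
sudo vgs lvmvg         # volume group the LVM driver will provision volumes from
sudo vgdisplay lvmvg   # detailed view, including total and free capacity
```

If the volume group does not show up on a node, the LVM driver will not be able to provision volumes there.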
## Support diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md index 652419d6e..ea53f9d6b 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/advanced-operations/zfs-backup-restore.md @@ -12,7 +12,7 @@ description: This section talks about the advanced operations that can be perfor ## Prerequisites -You should have installed the Local PV ZFS 1.0.0 or later version for the Backup and Restore, see [here](https://github.com/openebs/zfs-localpv/blob/develop/README.md) for the steps to install the Local PV ZFS driver. +You should have installed the Local PV ZFS 1.0.0 or later version for the Backup and Restore. Refer [Local PV ZFS](https://github.com/openebs/zfs-localpv/blob/develop/README.md) for the steps to install the Local PV ZFS driver. | Project | Minimum Version | | :--- | :--- | @@ -332,4 +332,4 @@ $ kubectl delete crds -l component=velero ## Reference -See the [velero documentation](https://velero.io/docs/) to find all the supported commands and options for the backup and restore. \ No newline at end of file +Refer to the [Velero documentation](https://velero.io/docs/) to find all the supported commands and options for the backup and restore. \ No newline at end of file diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md index f3369cd1a..10fca8467 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md @@ -125,7 +125,7 @@ The provisioner name for ZFS driver is "zfs.csi.openebs.io", we have to use this **Scheduler** -The ZFS driver has its own scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it will try to find a ZFS pool which has less number of volumes provisioned in it or less capacity of volume provisioned out of a pool respectively, from all the nodes where the ZFS pools are available. See [here](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler) to know about how to select scheduler via storage-class. +The ZFS driver has its own scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it will try to find a ZFS pool which has less number of volumes provisioned in it or less capacity of volume provisioned out of a pool respectively, from all the nodes where the ZFS pools are available. Refer [StorageClass With k8s Scheduler](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler) to know about how to select scheduler via storage-class. Once it can find the node, it will create a PV for that node and also create a ZFSVolume custom resource for the volume with the NODE information. 
The watcher for this ZFSVolume CR will get all the information for this object and creates a ZFS dataset (zvol) with the given ZFS property on the mentioned node. @@ -326,7 +326,11 @@ parameters: provisioner: zfs.csi.openebs.io ``` -Here, we can mention any fstype we want. As of 0.9 release, the driver supports ext2/3/4, xfs, and btrfs fstypes for which it will create a ZFS Volume. Please note here, if fstype is not provided in the StorageClass, the k8s takes “ext4" as the default fstype. Here also we can provide volblocksize, compression, and dedup properties to create the volume, and the driver will create the volume with all the properties provided in the StorageClass. +Here, we can mention any fstype we want. As of 0.9 release, the driver supports ext2/3/4, xfs, and btrfs fstypes for which it will create a ZFS Volume. + +:::note +If `fstype` is not provided in the StorageClass, the k8s takes “ext4" as the default fstype. Here also we can provide volblocksize, compression, and dedup properties to create the volume, and the driver will create the volume with all the properties provided in the StorageClass. +::: We have the thinprovision option as “yes” in the StorageClass, which means that it does not reserve the space for all the volumes provisioned using this StorageClass. We can set it to “no” if we want to reserve the space for the provisioned volumes. @@ -462,7 +466,7 @@ spec: - test2 ``` -If you want to change topology keys, just set new env(ALLOWED_TOPOLOGIES). See [FAQs](../../../faqs/faqs.md#how-to-add-custom-topology-key-to-local-pv-zfs-driver) for more details. +If you want to change topology keys, just set new env(ALLOWED_TOPOLOGIES). Refer [FAQs](../../../faqs/faqs.md#how-to-add-custom-topology-key-to-local-pv-zfs-driver) for more details. ``` $ kubectl edit ds -n kube-system openebs-zfs-node diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md index c593ce389..5e18e2c30 100644 --- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md +++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md @@ -34,7 +34,7 @@ If you have the disk (say /dev/sdb), then you can use the below command to creat $ zpool create zfspv-pool /dev/sdb ``` -You can also create mirror or raidz pool as per your need. See [here](https://github.com/openzfs/zfs) for more information. +You can also create mirror or raidz pool as per your need. Refer [Local PV ZFS](https://github.com/openzfs/zfs) for more information. If you do not have the disk, then you can create the zpool on the loopback device which is backed by a sparse file. Use this for testing purpose only. @@ -63,7 +63,7 @@ Configure the [custom topology keys](../../../faqs/faqs.md#how-to-add-custom-top ## Installation -For installation instructions, see [here](../../../quickstart-guide/installation.md). +Refer to the [OpenEBS Installation documentation](../../../quickstart-guide/installation.md) to install Local PV ZFS. 
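After the installation completes, you can sanity-check that the ZFS driver components are running. A minimal check, assuming the components are deployed in the `kube-system` namespace as referenced elsewhere in these docs (newer Helm installs may place them in the `openebs` namespace instead, so adjust `-n` accordingly):

```
# Node plugin daemonset used by the ZFS driver
kubectl get ds -n kube-system openebs-zfs-node

# All ZFS driver pods (controller and per-node plugins)
kubectl get pods -n kube-system | grep openebs-zfs
```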
## Support diff --git a/docs/main/user-guides/local-storage-user-guide/localpv-hostpath.md b/docs/main/user-guides/local-storage-user-guide/localpv-hostpath.md deleted file mode 100644 index c42535ed7..000000000 --- a/docs/main/user-guides/local-storage-user-guide/localpv-hostpath.md +++ /dev/null @@ -1,151 +0,0 @@ ---- -id: localpv-hostpath -title: Local PV Hostpath User Guide -keywords: - - OpenEBS Local PV Hostpath - - Local PV Hostpath - - Prerequisites - - Install - - Create StorageClass - - Support -description: This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by Hostpath. ---- - -# Local PV Hostpath User Guide - -This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by Hostpath. - -*OpenEBS Dynamic Local PV provisioner* can create Kubernetes Local Persistent Volumes using a unique Hostpath (directory) on the node to persist data, hereafter referred to as *OpenEBS Local PV Hostpath* volumes. - -*OpenEBS Local PV Hostpath* volumes have the following advantages compared to native Kubernetes hostpath volumes. -- OpenEBS Local PV Hostpath allows your applications to access hostpath via StorageClass, PVC, and PV. This provides you the flexibility to change the PV providers without having to redesign your Application YAML. -- Data protection using the Velero Backup and Restore. -- Protect against hostpath security vulnerabilities by masking the hostpath completely from the application YAML and pod. - -OpenEBS Local PV uses volume topology aware pod scheduling enhancements introduced by [Kubernetes Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) - -## Prerequisites - -Setup the directory on the nodes where Local PV Hostpaths will be created. This directory will be referred to as `BasePath`. The default location is `/var/openebs/local`. - -`BasePath` can be any of the following: -- A directory on root disk (or `os disk`). (Example: `/var/openebs/local`). -- In the case of bare-metal Kubernetes nodes, a mounted directory using the additional drive or SSD. (Example: An SSD available at `/dev/sdb`, can be formatted with Ext4 and mounted as `/mnt/openebs-local`) -- In the case of cloud or virtual instances, a mounted directory created from attaching an external cloud volume or virtual disk. (Example, in GKE, a Local SSD can be used which will be available at `/mnt/disk/ssd1`.) - -:::note air-gapped environment -If you are running your Kubernetes cluster in an air-gapped environment, make sure the following container images are available in your local repository. -- openebs/localpv-provisioner -- openebs/linux-utils -::: - -:::note Rancher RKE cluster -If you are using the Rancher RKE cluster, you must configure kubelet service with `extra_binds` for `BasePath`. If your `BasePath` is the default directory `/var/openebs/local`, then extra_binds section should have the following details: -``` -services: - kubelet: - extra_binds: - - /var/openebs/local:/var/openebs/local -``` -::: - -## Install - -For installation instructions, see [here](../../quickstart-guide/installation.md). - -## Configuration - -This section will help you to configure Local PV Hostpath. - -### Create StorageClass - -You can skip this section if you would like to use default OpenEBS Local PV Hostpath StorageClass created by OpenEBS. - -The default Storage Class is called `openebs-hostpath` and its `BasePath` is configured as `/var/openebs/local`. - -1. 
To create your own StorageClass with custom `BasePath`, save the following StorageClass definition as `local-hostpath-sc.yaml` - - ``` - apiVersion: storage.k8s.io/v1 - kind: StorageClass - metadata: - name: local-hostpath - annotations: - openebs.io/cas-type: local - cas.openebs.io/config: | - - name: StorageType - value: hostpath - - name: BasePath - value: /var/local-hostpath - provisioner: openebs.io/local - reclaimPolicy: Delete - volumeBindingMode: WaitForFirstConsumer - ``` - #### (Optional) Custom Node Labelling - - In Kubernetes, Hostpath LocalPV identifies nodes using labels such as `kubernetes.io/hostname=`. However, these default labels might not ensure each node is distinct across the entire cluster. To solve this, you can make custom labels. As an admin, you can define and set these labels when configuring a **StorageClass**. Here's a sample storage class: - - ``` - apiVersion: storage.k8s.io/v1 - kind: StorageClass - metadata: - name: local-hostpath - annotations: - openebs.io/cas-type: local - cas.openebs.io/config: | - - name: StorageType - value: "hostpath" - - name: NodeAffinityLabels - list: - - "openebs.io/custom-node-unique-id" - provisioner: openebs.io/local - volumeBindingMode: WaitForFirstConsumer - - ``` - :::note - Using NodeAffinityLabels does not influence scheduling of the application Pod. Use kubernetes [allowedTopologies](https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/tutorials/hostpath/allowedtopologies.md) to configure scheduling options. - ::: - -2. Edit `local-hostpath-sc.yaml` and update with your desired values for `metadata.name` and `cas.openebs.io/config.BasePath`. - - :::note - If the `BasePath` does not exist on the node, *OpenEBS Dynamic Local PV Provisioner* will attempt to create the directory, when the first Local Volume is scheduled on to that node. You MUST ensure that the value provided for `BasePath` is a valid absolute path. - ::: - -3. Create OpenEBS Local PV Hostpath Storage Class. - ``` - kubectl apply -f local-hostpath-sc.yaml - ``` - -4. Verify that the StorageClass is successfully created. - ``` - kubectl get sc local-hostpath -o yaml - ``` - -## Deploy an Application - -For deployment instructions, see [here](../../quickstart-guide/deploy-a-test-application.md). - -## Cleanup - -Delete the Pod, the PersistentVolumeClaim and StorageClass that you might have created. - -``` -kubectl delete pod hello-local-hostpath-pod -kubectl delete pvc local-hostpath-pvc -kubectl delete sc local-hostpath -``` - -Verify that the PV that was dynamically created is also deleted. -``` -kubectl get pv -``` - -## Support - -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). 
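For reference, the PVC and Pod that the Cleanup step above removes could have been created with manifests along these lines. This is only a sketch using the `local-hostpath` StorageClass defined earlier and the names assumed by the cleanup commands; the linked deployment guide remains the authoritative walkthrough:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: local-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath-pod
spec:
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: local-hostpath-pvc
  containers:
  - name: hello-container
    image: busybox
    command: ["sh", "-c", "while true; do echo hello >> /mnt/store/greet.txt; sleep 5; done"]
    volumeMounts:
    - mountPath: /mnt/store
      name: local-storage
```

Because the StorageClass uses `WaitForFirstConsumer`, the PVC stays Pending until the Pod is scheduled, at which point the hostpath volume is created on that node.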
- -## See Also - -[Installation](../../quickstart-guide/installation.md) -[Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) diff --git a/docs/main/user-guides/local-storage-user-guide/lvm-localpv.md b/docs/main/user-guides/local-storage-user-guide/lvm-localpv.md deleted file mode 100644 index 5353b1bdb..000000000 --- a/docs/main/user-guides/local-storage-user-guide/lvm-localpv.md +++ /dev/null @@ -1,167 +0,0 @@ ---- -id: lvm-localpv -title: LVM Local PV User Guide -keywords: - - OpenEBS LVM Local PV - - LVM Local PV - - Prerequisites - - Install - - Create StorageClass - - Install verification - - Create a PersistentVolumeClaim -description: This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by LVM Local PV. ---- - -# LVM Local PV User Guide - -This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by LVM Local PV. - -## Prerequisites - -Before installing LVM driver, make sure your Kubernetes Cluster must meet the following prerequisites: - -1. All the nodes must have lvm2 utils installed and the dm-snapshot kernel module loaded. -2. You have access to install RBAC components into kube-system namespace. The OpenEBS LVM driver components are installed in kube-system namespace to allow them to be flagged as system critical components. - -## Setup Volume Group - -Find the disk which you want to use for the LVM, for testing you can use the loopback device - -``` -truncate -s 1024G /tmp/disk.img -sudo losetup -f /tmp/disk.img --show -``` - -Create the Volume group on all the nodes, which will be used by the LVM Driver for provisioning the volumes - -``` -sudo pvcreate /dev/loop0 -sudo vgcreate lvmvg /dev/loop0 ## here lvmvg is the volume group name to be created -``` - -## Installation - -For installation instructions, see [here](../../quickstart-guide/installation.md). - -## Configuration - -This section will help you to configure LVM Local PV. - -### Create StorageClass - -``` -$ cat sc.yaml - -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-lvmpv -parameters: - storage: "lvm" - volgroup: "lvmvg" -provisioner: local.csi.openebs.io -``` - -Check the doc on [storageclasses](https://github.com/openebs/lvm-localpv/blob/develop/docs/storageclasses.md) to know all the supported parameters for LVM-LocalPV. - -#### VolumeGroup Availability - -If LVM volume group is available on certain nodes only, then make use of topology to tell the list of nodes where we have the volgroup available. As shown in the below storage class, we can use allowedTopologies to describe volume group availability on nodes. - -``` -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-lvmpv -allowVolumeExpansion: true -parameters: - storage: "lvm" - volgroup: "lvmvg" -provisioner: local.csi.openebs.io -allowedTopologies: -- matchLabelExpressions: - - key: kubernetes.io/hostname - values: - - lvmpv-node1 - - lvmpv-node2 -``` - -The above storage class tells that volume group "lvmvg" is available on nodes lvmpv-node1 and lvmpv-node2 only. The LVM driver will create volumes on those nodes only. - - :::note - The provisioner name for LVM driver is "local.csi.openebs.io", we have to use this while creating the storage class so that the volume provisioning/deprovisioning request can come to LVM driver. 
- ::: - - ### Create PersistentVolumeClaim - - ``` - $ cat pvc.yaml - -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: csi-lvmpv -spec: - storageClassName: openebs-lvmpv - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4Gi - ``` - - Create a PVC using the storage class created for the LVM driver. - - ## Deploy the Application - - Create the deployment yaml using the pvc backed by LVM storage. - - ``` - $ cat fio.yaml - -apiVersion: v1 -kind: Pod -metadata: - name: fio -spec: - restartPolicy: Never - containers: - - name: perfrunner - image: openebs/tests-fio - command: ["/bin/bash"] - args: ["-c", "while true ;do sleep 50; done"] - volumeMounts: - - mountPath: /datadir - name: fio-vol - tty: true - volumes: - - name: fio-vol - persistentVolumeClaim: - claimName: csi-lvmpv - ``` - - After the deployment of the application, we can go to the node and see that the lvm volume is being used by the application for reading/writting the data and space is consumed from the LVM. Please note that to check the provisioned volumes on the node, we need to run pvscan --cache command to update the lvm cache and then we can use lvdisplay and all other lvm commands on the node. - - ## Deprovisioning - -To deprovision the volume we can delete the application which is using the volume and then we can go ahead and delete the pv, as part of deletion of pv this volume will also be deleted from the volume group and data will be freed. - -``` -$ kubectl delete -f fio.yaml -pod "fio" deleted -$ kubectl delete -f pvc.yaml -persistentvolumeclaim "csi-lvmpv" deleted -``` - -## Limitation - -Resize of volumes with snapshot is not supported. - -## Support - -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). - -## See Also - -[Installation](../../quickstart-guide/installation.md) -[Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) diff --git a/docs/main/user-guides/local-storage-user-guide/zfs-localpv.md b/docs/main/user-guides/local-storage-user-guide/zfs-localpv.md deleted file mode 100644 index 00ced2df0..000000000 --- a/docs/main/user-guides/local-storage-user-guide/zfs-localpv.md +++ /dev/null @@ -1,364 +0,0 @@ ---- -id: zfs-localpv -title: ZFS Local PV User Guide -keywords: - - OpenEBS ZFS Local PV - - ZFS Local PV - - Prerequisites - - Install - - Create StorageClass - - Install verification - - Create a PersistentVolumeClaim -description: This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by ZFS Local PV. ---- - -# ZFS Local PV User Guide - -This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by ZFS Local PV. - -## Prerequisites - -Before installing ZFS driver, make sure your Kubernetes Cluster must meet the following prerequisites: - -1. All the nodes must have zfs utils installed. -2. ZPOOL has been setup for provisioning the volume. -3. You have access to install RBAC components into kube-system namespace. The OpenEBS ZFS driver components are installed in kube-system namespace to allow them to be flagged as system critical components. - -## Setup - -Setup -All the node should have zfsutils-linux installed. 
We should go to the each node of the cluster and install zfs utils: - -``` -$ apt-get install zfsutils-linux -``` - -Go to each node and create the ZFS Pool, which will be used for provisioning the volumes. You can create the Pool of your choice, it can be striped, mirrored or raidz pool. - -If you have the disk(say /dev/sdb) then you can use the below command to create a striped pool : - -``` -$ zpool create zfspv-pool /dev/sdb -``` - -You can also create mirror or raidz pool as per your need. Check https://github.com/openzfs/zfs for more information. - -If you do not have the disk, then you can create the zpool on the loopback device which is backed by a sparse file. Use this for testing purpose only. - -``` -$ truncate -s 100G /tmp/disk.img -$ zpool create zfspv-pool `losetup -f /tmp/disk.img --show` -``` - -Once the ZFS Pool is created, verify the pool via zpool status command, you should see something like this: - -``` -$ zpool status - pool: zfspv-pool - state: ONLINE - scan: none requested -config: - - NAME STATE READ WRITE CKSUM - zfspv-pool ONLINE 0 0 0 - sdb ONLINE 0 0 0 - -errors: No known data errors -``` - -Configure the custom topology keys (if needed). This can be used for many purposes like if we want to create the PV on nodes in a particuler zone or building. We can label the nodes accordingly and use that key in the storageclass for taking the scheduling decesion: - -https://github.com/openebs/zfs-localpv/blob/HEAD/docs/faq.md#6-how-to-add-custom-topology-key - -## Installation - -For installation instructions, see [here](../../quickstart-guide/installation.md). - -## Configuration - -This section will help you to configure ZFS Local PV. - -### Create StorageClass - -``` -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-zfspv -parameters: - recordsize: "128k" - compression: "off" - dedup: "off" - fstype: "zfs" - poolname: "zfspv-pool" -provisioner: zfs.csi.openebs.io -``` - -The storage class contains the volume parameters like recordsize(should be power of 2), compression, dedup and fstype. You can select what are all parameters you want. In case, ZFS properties paramenters are not provided, the volume will inherit the properties from the ZFS Pool. - -The poolname is the must argument. It should be noted that poolname can either be the root dataset or a child dataset e.g. - -``` -poolname: "zfspv-pool" -poolname: "zfspv-pool/child" -``` - -Also the dataset provided under `poolname` must exist on all the nodes with the name given in the storage class. Check the doc on storageclasses to know all the supported parameters for ZFS-LocalPV - -**ext2/3/4 or xfs or btrfs as FsType** -If we provide fstype as one of ext2/3/4 or xfs or btrfs, the driver will create a ZVOL, which is a blockdevice carved out of ZFS Pool. This blockdevice will be formatted with corresponding filesystem before it's used by the driver. - -:::note -There will be a filesystem layer on top of ZFS volume and applications may not get optimal performance. -::: - -A sample storage class for ext4 fstype is provided below: - -``` -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-zfspv -parameters: - volblocksize: "4k" - compression: "off" - dedup: "off" - fstype: "ext4" - poolname: "zfspv-pool" -provisioner: zfs.csi.openebs.io -``` - -:::note -We are providing `volblocksize` instead of `recordsize` since we will create a ZVOL, for which we can select the blocksize with which we want to create the block device. 
Also, note that for ZFS, volblocksize should be power of 2. -::: - -**ZFS as FsType** - -In case if we provide "zfs" as the fstype, the ZFS driver will create ZFS DATASET in the ZFS Pool, which is the ZFS filesystem. Here, there will not be any extra layer between application and storage, and applications can get the optimal performance. - -The sample storage class for ZFS fstype is provided below: - -``` -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-zfspv -parameters: - recordsize: "128k" - compression: "off" - dedup: "off" - fstype: "zfs" - poolname: "zfspv-pool" -provisioner: zfs.csi.openebs.io -``` - -:::note -We are providing `recordsize` which will be used to create the ZFS datasets, which specifies the maximum block size for files in the zfs file system. The recordsize has to be power of 2 for ZFS datasets. -::: - -**ZPOOL Availability** - -If ZFS pool is available on certain nodes only, then make use of topology to tell the list of nodes where we have the ZFS pool available. As shown in the below storage class, we can use allowedTopologies to describe ZFS pool availability on nodes. - -``` -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-zfspv -allowVolumeExpansion: true -parameters: - recordsize: "128k" - compression: "off" - dedup: "off" - fstype: "zfs" - poolname: "zfspv-pool" -provisioner: zfs.csi.openebs.io -allowedTopologies: -- matchLabelExpressions: - - key: kubernetes.io/hostname - values: - - zfspv-node1 - - zfspv-node2 -``` - -The above storage class tells that ZFS pool "zfspv-pool" is available on nodes zfspv-node1 and zfspv-node2 only. The ZFS driver will create volumes on those nodes only. - -:::note -The provisioner name for ZFS driver is "zfs.csi.openebs.io", we have to use this while creating the storage class so that the volume provisioning/deprovisioning request can come to ZFS driver. -::: - -**Scheduler** - -The ZFS driver has its own scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it will try to find a ZFS pool which has less number of volumes provisioned in it or less capacity of volume provisioned out of a pool respectively, from all the nodes where the ZFS pools are available. To know about how to select scheduler via storage-class See [this](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler). Once it is able to find the node, it will create a PV for that node and also create a ZFSVolume custom resource for the volume with the NODE information. The watcher for this ZFSVolume CR will get all the information for this object and creates a ZFS dataset(zvol) with the given ZFS property on the mentioned node. - -The scheduling algorithm currently only accounts for either the number of ZFS volumes or total capacity occupied from a zpool and does not account for other factors like available cpu or memory while making scheduling decisions. - -So if you want to use node selector/affinity rules on the application pod, or have cpu/memory constraints, kubernetes scheduler should be used. To make use of kubernetes scheduler, you can set the `volumeBindingMode` as `WaitForFirstConsumer` in the storage class. - -This will cause a delayed binding, i.e kubernetes scheduler will schedule the application pod first and then it will ask the ZFS driver to create the PV. 
- -The driver will then create the PV on the node where the pod is scheduled: - -``` -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: openebs-zfspv -allowVolumeExpansion: true -parameters: - recordsize: "128k" - compression: "off" - dedup: "off" - fstype: "zfs" - poolname: "zfspv-pool" -provisioner: zfs.csi.openebs.io -volumeBindingMode: WaitForFirstConsumer -``` - -:::note -Once a PV is created for a node, application using that PV will always get scheduled to that particular node only, as PV will be sticky to that node. -::: - -The scheduling algorithm by ZFS driver or kubernetes will come into picture only during the deployment time. Once the PV is created, the application can not move anywhere as the data is there on the node where the PV is. - -### Create PersistentVolumeClaim - -``` -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: csi-zfspv -spec: - storageClassName: openebs-zfspv - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 4Gi -``` - -Create a PVC using the storage class created for the ZFS driver. Here, the allocated volume size will be rounded off to the nearest Mi or Gi notation, check the [faq section](../../faqs/faqs.md) for more details. - -If we are using the immediate binding in the storageclass then we can check the kubernetes resource for the corresponding ZFS volume, otherwise in late binding case, we can check the same after pod has been scheduled: - -``` -$ kubectl get zv -n openebs -NAME ZPOOL NODE SIZE STATUS FILESYSTEM AGE -pvc-34133838-0d0d-11ea-96e3-42010a800114 zfspv-pool zfspv-node1 4294967296 Ready zfs 4s -``` - -``` -$ kubectl describe zv pvc-34133838-0d0d-11ea-96e3-42010a800114 -n openebs -Name: pvc-34133838-0d0d-11ea-96e3-42010a800114 -Namespace: openebs -Labels: kubernetes.io/nodename=zfspv-node1 -Annotations: -API Version: openebs.io/v1alpha1 -Kind: ZFSVolume -Metadata: - Creation Timestamp: 2019-11-22T09:49:29Z - Finalizers: - zfs.openebs.io/finalizer - Generation: 1 - Resource Version: 2881 - Self Link: /apis/openebs.io/v1alpha1/namespaces/openebs/zfsvolumes/pvc-34133838-0d0d-11ea-96e3-42010a800114 - UID: 60bc4df2-0d0d-11ea-96e3-42010a800114 -Spec: - Capacity: 4294967296 - Compression: off - Dedup: off - Fs Type: zfs - Owner Node ID: zfspv-node1 - Pool Name: zfspv-pool - Recordsize: 4k - Volume Type: DATASET -Status: - State: Ready -Events: -``` - -The ZFS driver will create a ZFS dataset (or zvol as per fstype in the storageclass) on the node zfspv-node1 for the mentioned ZFS pool and the dataset name will same as PV name. - -Go to the node zfspv-node1 and check the volume: - -``` -$ zfs list -NAME USED AVAIL REFER MOUNTPOINT -zfspv-pool 444K 362G 96K /zfspv-pool -zfspv-pool/pvc-34133838-0d0d-11ea-96e3-42010a800114 96K 4.00G 96K legacy -``` - -## Deploy the Application - -Create the deployment yaml using the pvc backed by ZFS-LocalPV storage. - -``` -apiVersion: v1 -kind: Pod -metadata: - name: fio -spec: - restartPolicy: Never - containers: - - name: perfrunner - image: openebs/tests-fio - command: ["/bin/bash"] - args: ["-c", "while true ;do sleep 50; done"] - volumeMounts: - - mountPath: /datadir - name: fio-vol - tty: true - volumes: - - name: fio-vol - persistentVolumeClaim: - claimName: csi-zfspv -``` - -After the deployment of the application, we can go to the node and see that the zfs volume is being used by the application for reading/writting the data and space is consumed from the ZFS pool. 
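As a quick check (reusing the example pool, volume, and pod names from above), the space consumption can be confirmed both on the node that owns the volume and from inside the application pod:

```
# On the node that owns the volume: confirm the dataset exists and see how much
# space the application has consumed from the ZFS pool.
zfs list zfspv-pool/pvc-34133838-0d0d-11ea-96e3-42010a800114
zfs get used,available,compressratio zfspv-pool/pvc-34133838-0d0d-11ea-96e3-42010a800114
zpool list zfspv-pool

# From the cluster: the same usage is visible inside the application pod.
kubectl exec fio -- df -h /datadir
```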
- -## ZFS Property Change - -ZFS Volume Property can be changed like compression on/off can be done by just simply editing the kubernetes resource for the corresponding zfs volume by using below command: - -``` -$ kubectl edit zv pvc-34133838-0d0d-11ea-96e3-42010a800114 -n openebs -``` -You can edit the relevant property like make compression on or make dedup on and save it. This property will be applied to the corresponding volume and can be verified using below command on the node: - -``` -$ zfs get all zfspv-pool/pvc-34133838-0d0d-11ea-96e3-42010a800114 -``` - -## Deprovisioning - -To deprovision the volume we can delete the application which is using the volume and then we can go ahead and delete the pv, as part of deletion of pv this volume will also be deleted from the ZFS pool and data will be freed. - -``` -$ kubectl delete -f fio.yaml -pod "fio" deleted -$ kubectl delete -f pvc.yaml -persistentvolumeclaim "csi-zfspv" deleted -``` - -:::Warning -If you are running kernel ZFS on the same set of nodes, the following two points are recommended: - -- Disable zfs-import-scan.service service that will avoid importing all pools by scanning all the available devices in the system, disabling scan service will avoid importing pools that are not created by kernel. - -- Disabling scan service will not cause harm since zfs-import-cache.service is enabled and it is the best way to import pools by looking at cache file during boot time. - -``` -$ systemctl stop zfs-import-scan.service -$ systemctl disable zfs-import-scan.service -``` - -Always maintain upto date /etc/zfs/zpool.cache while performing operations on zfs pools(zpool set cachefile=/etc/zfs/zpool.cache). - -## Support - -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). - -## See Also - -[Installation](../../quickstart-guide/installation.md) -[Deploy an Application](../../quickstart-guide/deploy-a-test-application.md) \ No newline at end of file diff --git a/docs/main/user-guides/localpv-device.md b/docs/main/user-guides/localpv-device.md deleted file mode 100644 index db491b798..000000000 --- a/docs/main/user-guides/localpv-device.md +++ /dev/null @@ -1,624 +0,0 @@ ---- -id: localpv-device -title: OpenEBS Local PV Device User Guide -keywords: - - OpenEBS Local PV Device - - Local PV Prerequisites - - OpenEBS Local PV Installation - - Create StorageClass - - Create a PersistentVolumeClaim - - Create Pod to consume OpenEBS Local PV backed by Block Device - - Cleanup - - Backup and Restore - - Troubleshooting -description: This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by Block Devices. ---- - -[![OpenEBS configuration flow](../assets/4-config-sequence.svg)](../assets/4-config-sequence.svg) - -This guide will help you to set up and use OpenEBS Local Persistent Volumes backed by Block Devices. - -*OpenEBS Dynamic Local PV provisioner* can create Kubernetes Local Persistent Volumes using block devices available on the node to persist data, hereafter referred to as *OpenEBS Local PV Device* volumes. - -*OpenEBS Local PV Device* volumes have the following advantages compared to native Kubernetes Local Persistent Volumes. -- Dynamic Volume provisioner as opposed to a Static Provisioner. -- Better management of the Block Devices used for creating Local PVs by OpenEBS NDM. 
NDM provides capabilities like discovering Block Device properties, setting up Device Filters, metrics collection and ability to detect if the Block Devices have moved across nodes. - -OpenEBS Local PV uses volume topology aware pod scheduling enhancements introduced by [Kubernetes Local Volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local) - -:::tip QUICKSTART - -OpenEBS Local PV Device volumes will be created using the Block Devices available on the node. You can customize which block devices can be used for creating Local PVs by [configuring NDM parameters](#install) and/or by creating new [StorageClass](#create-storageclass). - -If you have OpenEBS already installed, you can create an example pod that persists data to *OpenEBS Local PV Device* with following kubectl commands. -``` -kubectl apply -f https://openebs.github.io/charts/examples/local-device/local-device-pvc.yaml -kubectl apply -f https://openebs.github.io/charts/examples/local-device/local-device-pod.yaml -``` - -Verify using below kubectl commands that example pod is running and is using a OpenEBS Local PV Device. -``` -kubectl get pod hello-local-device-pod -kubectl get pvc local-device-pvc -``` - -For a more detailed walkthrough of the setup, follow along the rest of this document. -::: - -## Minimum Versions - -- Kubernetes 1.12 or higher is required -- OpenEBS 1.0 or higher is required. - -:::note air-gapped environment -If you are running your Kubernetes cluster in an air-gapped environment, make sure the following container images are available in your local repository. -- openebs/localpv-provisioner -- openebs/linux-utils -- openebs/node-disk-manager -- openebs/node-disk-operator -::: - -## Prerequisites - -For provisioning Local PV using the block devices, the Kubernetes nodes should have block devices attached to the nodes. The block devices can optionally be formatted and mounted. - -The block devices can be any of the following: - -- SSD, NVMe or Hard Disk attached to a Kubernetes node (Bare metal server) -- Cloud Provider Disks like EBS or GPD attached to a Kubernetes node (Cloud instances. GKE or EKS) -- Virtual Disks like a vSAN volume or VMDK disk attached to a Kubernetes node (Virtual Machine) - -## Install - -### Customize NDM and Install - -You can skip this section if you have already installed OpenEBS. - -*OpenEBS Dynamic Local Provisioner* uses the Block Devices discovered by NDM to create Local PVs. NDM offers some configurable parameters that can be applied during the OpenEBS Installation. Some key configurable parameters available for NDM are: - -1. Prepare to install OpenEBS by providing custom values for configurable parameters. - - The location of the *OpenEBS Dynamic Local PV provisioner* container image. - ```shell hideCopy - Default value: openebs/provisioner-localpv - YAML specification: spec.image on Deployment(localpv-provisioner) - Helm key: localprovisioner.image - ``` - - - The location of the *OpenEBS NDM DaemonSet* container image. NDM DaemonSet helps with discovering block devices attached to a node and creating Block Device Resources. - ```shell hideCopy - Default value: openebs/node-disk-manager - YAML specification: spec.image on DaemonSet(openebs-ndm) - Helm key: ndm.image - ``` - - - The location of the *OpenEBS NDM Operator* container image. NDM Operator helps with allocating Block Devices to Block Device Claims raised by *OpenEBS Dynamic Local PV Provisioner*. 
- ```shell hideCopy - Default value: openebs/node-disk-operator - YAML specification: spec.image on Deployment(openebs-ndm-operator) - Helm key: ndmOperator.image - ``` - - - The location of the *Provisioner Helper* container image. *OpenEBS Dynamic Local Provisioner* create a *Provisioner Helper* pod to clean up the data from the block device after the PV has been deleted. - - ```shell hideCopy - Default value: openebs/linux-utils - YAML specification: Environment Variable (CLEANUP_JOB_IMAGE) on Deployment(ndm-operator) - Helm key: helper.image - ``` - - - Specify the list of block devices for which BlockDevice CRs must be created. A comma separated values of path regular expressions can be specified. - ```shell hideCopy - Default value: all - YAML specification: data."node-disk-manager.config".filterconfigs.key["path-filter"].include on ConfigMap(openebs-ndm-config) - Helm key: ndm.filters.includePaths - ``` - - - Specify the list of block devices for which BlockDevice CRs must not be created. A comma separated values of path regular expressions can be specified. - ```shell hideCopy - Default value: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md" - YAML specification: data."node-disk-manager.config".filterconfigs.key["path-filter"].exclude on ConfigMap(openebs-ndm-config) - Helm key: ndm.filters.excludePaths - ``` - -2. You can proceed to install OpenEBS either using kubectl or helm using the steps below. - - - Install using kubectl - - If you would like to change the default values for any of the configurable parameters mentioned in the previous step, download the `openebs-operator.yaml` and make the necessary changes before applying. - ``` - kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml - ``` - - :::note - If you would like to use only Local PV (hostpath and device), you can install a lite version of OpenEBS using the following command. - - kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml - kubectl apply -f https://openebs.github.io/charts/openebs-lite-sc.yaml - ::: - - - Install using OpenEBS helm charts - - If you would like to change the default values for any of the configurable parameters mentioned in the previous step, specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. - - ``` - helm repo add openebs https://openebs.github.io/charts - helm repo update - helm install --namespace openebs --name openebs openebs/openebs - ``` - -### (Optional) Block Device Tagging - -You can reserve block devices in the cluster that you would like the *OpenEBS Dynamic Local Provisioner* to pick up some specific block devices available on the node. You can use the NDM Block Device tagging feature to reserve the devices. For example, if you would like Local SSDs on your cluster for running Mongo stateful application. You can tag a few devices in the cluster with a tag named `mongo`. - -``` -kubectl label bd -n openebs blockdevice-0052b132e6c5800139d1a7dfded8b7d7 openebs.io/block-device-tag=mongo -``` - -BlockDeviceSelectors may be used to filter BlockDevices with any label that they may have (e.g. [NDM metaconfigs](https://github.com/openebs/node-disk-manager/pull/618)). [Click here](https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/tutorials/device/blockdeviceselectors.md) for more information. - -## Create StorageClass - -You can skip this section if you would like to use default OpenEBS Local PV Device StorageClass created by OpenEBS. - -The default Storage Class is called `openebs-device`. 
If the block devices are not formatted, the devices will be formatted with `ext4`. - -1. To create your own StorageClass to customize how Local PV with devices are created. For instance, if you would like to run MongoDB stateful applications with Local PV device, you would want to set the default filesystem as `xfs` and/or also dedicate some devices on node that you want to use for Local PV. Save the following StorageClass definition as `local-device-sc.yaml` - - ``` - apiVersion: storage.k8s.io/v1 - kind: StorageClass - metadata: - name: local-device - annotations: - openebs.io/cas-type: local - cas.openebs.io/config: | - - name: StorageType - value: device - - name: FSType - value: xfs - - name: BlockDeviceSelectors - data: - openebs.io/block-device-tag: "mongo" - provisioner: openebs.io/local - reclaimPolicy: Delete - volumeBindingMode: WaitForFirstConsumer - ``` - :::note - The `volumeBindingMode` MUST ALWAYS be set to `WaitForFirstConsumer`. `volumeBindingMode: WaitForFirstConsumer` instructs Kubernetes to initiate the creation of PV only after Pod using PVC is scheduled to the node. - ::: - - :::note - The `FSType` will take effect only if the underlying block device is not formatted. For instance if the block device is formatted with "Ext4", specifying "XFS" in the storage class will not clear Ext4 and format with XFS. If the block devices are already formatted, you can clear the filesystem information using `wipefs -f -a `. After the filesystem has been cleared, NDM pod on the node needs to be restarted to update the Block Device. - ::: - - #### (Optional) Custom Node Labelling - - In Kubernetes, Device LocalPV identifies nodes using labels such as `kubernetes.io/hostname=`. However, these default labels might not ensure each node is distinct across the entire cluster. To solve this, you can make custom labels. As an admin, you can define and set these labels when configuring a **StorageClass**. Here's a sample storage class: - - ``` - apiVersion: storage.k8s.io/v1 - kind: StorageClass - metadata: - name: local-hostpath - annotations: - openebs.io/cas-type: local - cas.openebs.io/config: | - - name: StorageType - value: "device" - - name: NodeAffinityLabels - list: - - "openebs.io/custom-node-unique-id" - provisioner: openebs.io/local - volumeBindingMode: WaitForFirstConsumer - ``` - :::note - Using NodeAffinityLabels does not influence scheduling of the application Pod. Use kubernetes [allowedTopologies](https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/tutorials/device/allowedtopologies.md) to configure scheduling options. - - ::: - -2. Edit `local-device-sc.yaml` and update with your desired values for: - - - `metadata.name` - - `cas.openebs.io/config.FSType` - - `cas.openebs.io/config.BlockDeviceSelectors` - - :::note - BlockDeviceSelectors support for Local Volumes was introduced in OpenEBS 3.1. The support for BlockDeviceTag was also dropped in v3.1. If you are using BlockDeviceTag with a v3.1 provisioner or newer, you'd need to update your storageClass. Existing volumes will continue to work correctly. - - When specifying the value for BlockDeviceSelectors, you must already have Block Devices on the nodes labelled with the tag. See [Block Device Tagging](#optional-block-device-tagging) - ::: - -3. Create OpenEBS Local PV Device Storage Class. - ``` - kubectl apply -f local-device-sc.yaml - ``` - -4. Verify that the StorageClass is successfully created. 
- ``` - kubectl get sc local-device -o yaml - ``` - -## Create a PersistentVolumeClaim - -The next step is to create a PersistentVolumeClaim. Pods will use PersistentVolumeClaims to request Device backed Local PV from *OpenEBS Dynamic Local PV provisioner*. - -1. Here is the configuration file for the PersistentVolumeClaim. Save the following PersistentVolumeClaim definition as `local-device-pvc.yaml` - - ``` - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - name: local-device-pvc - spec: - storageClassName: local-device - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G - ``` - -2. Create the PersistentVolumeClaim - - ``` - kubectl apply -f local-device-pvc.yaml - ``` - -3. Look at the PersistentVolumeClaim: - - ``` - kubectl get pvc local-device-pvc - ``` - - The output shows that the `STATUS` is `Pending`. This means PVC has not yet been used by an application pod. The next step is to create a Pod that uses your PersistentVolumeClaim as a volume. - - ```shell hideCopy - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - local-device-pvc Pending local-device 31s - ``` - -### Using Raw Block Volume - -By default, Local PV volume will be provisioned with volumeMode as filesystem. If you would like to use it as [Raw Block Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#raw-block-volume-support), specify `spec.volumeMode` as `Block` in the Persistent Volume Claim spec. Here is the configuration file for the PersistentVolumeClaim with Raw Block Volume Support. - - ``` - kind: PersistentVolumeClaim - apiVersion: v1 - metadata: - name: local-device-pvc-block - spec: - storageClassName: local-device - volumeMode: Block - accessModes: - - ReadWriteOnce - resources: - requests: - storage: 5G - ``` - -:::note -Raw Block Volume support was introduced for OpenEBS Local PV OpenEBS 1.5. -::: - - -## Create Pod to consume OpenEBS Local PV backed by Block Device - -1. Here is the configuration file for the Pod that uses Local PV. Save the following Pod definition to `local-device-pod.yaml`. - - ``` - apiVersion: v1 - kind: Pod - metadata: - name: hello-local-device-pod - spec: - volumes: - - name: local-storage - persistentVolumeClaim: - claimName: local-device-pvc - containers: - - name: hello-container - image: busybox - command: - - sh - - -c - - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done' - volumeMounts: - - mountPath: /mnt/store - name: local-storage - ``` - - :::note - As the Local PV storage classes use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in the Pod spec, then PVC will remain in `pending` state. For more details refer https://github.com/openebs/openebs/issues/2915. - ::: - -2. Create the Pod: - - ``` - kubectl apply -f local-device-pod.yaml - ``` - -3. Verify that the container in the Pod is running; - - ``` - kubectl get pod hello-local-device-pod - ``` - -4. Verify that the container is using the Local PV Device - ``` - kubectl describe pod hello-local-device-pod - ``` - - The output shows that the Pod is running on `Node: gke-user-helm-default-pool-3a63aff5-1tmf` and using the persistent volume provided by `local-describe-pvc`. - - ```shell hideCopy - Name: hello-local-device-pod - Namespace: default - Priority: 0 - Node: gke-user-helm-default-pool-92abeacf-89nd/10.128.0.16 - Start Time: Thu, 16 Apr 2020 17:56:04 +0000 - ... 
- Volumes: - local-storage: - Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) - ClaimName: local-device-pvc - ReadOnly: false - ... - ``` - -5. Look at the PersistentVolumeClaim again to see the details about the dynamically provisioned Local PersistentVolume - ``` - kubectl get pvc local-device-pvc - ``` - - The output shows that the `STATUS` is `Bound`. A new Persistent Volume `pvc-79d25095-eb1f-4028-9843-7824cb82f07f` has been created. - - ```shell hideCopy - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - local-device-pvc Bound pvc-79d25095-eb1f-4028-9843-7824cb82f07f 5G RWO local-device 5m56s - ``` - -6. Look at the PersistentVolume details to see where the data is stored. Replace the PVC name with the one that was displayed in the previous step. - ``` - kubectl get pv pvc-79d25095-eb1f-4028-9843-7824cb82f07f -o yaml - ``` - The output shows that the PV was provisioned in response to PVC request `spec.claimRef.name: local-device-pvc`. - - ```shell hideCopy - apiVersion: v1 - kind: PersistentVolume - metadata: - name: pvc-79d25095-eb1f-4028-9843-7824cb82f07f - annotations: - pv.kubernetes.io/provisioned-by: openebs.io/local - ... - spec: - accessModes: - - ReadWriteOnce - capacity: - storage: 5G - claimRef: - apiVersion: v1 - kind: PersistentVolumeClaim - name: local-device-pvc - namespace: default - resourceVersion: "291148" - uid: 79d25095-eb1f-4028-9843-7824cb82f07f - ... - ... - local: - fsType: "" - path: /mnt/disks/ssd0 - nodeAffinity: - required: - nodeSelectorTerms: - - matchExpressions: - - key: kubernetes.io/hostname - operator: In - values: - - gke-user-helm-default-pool-92abeacf-89nd - persistentVolumeReclaimPolicy: Delete - storageClassName: local-device - volumeMode: Filesystem - status: - phase: Bound - ``` - -:::note -A few important characteristics of a *OpenEBS Local PV* can be seen from the above output: -- `spec.nodeAffinity` specifies the Kubernetes node where the Pod using the local volume is scheduled. -- `spec.local.path` specifies the path of the block device associated with this PV. -::: - -7. *OpenEBS Dynamic Local Provisioner* would have created a BlockDeviceClaim to get a BlockDevice from NDM. The BlockDeviceClaim will be having the same name as the PV name. Look at the BlockDeviceClaim details to see which Block Device is being used. Replace the PVC Name in the below command with the PVC name that was displayed in the previous step. - ``` - kubectl get bdc -n openebs bdc-pvc-79d25095-eb1f-4028-9843-7824cb82f07f - ``` - - The output shows that the `PHASE` is `Bound`, and provides the name of the Block Device `blockdevice-d1ef1e1b9dccf224e000c6f2e908c5f2` - - ```shell hideCopy - NAME BLOCKDEVICENAME PHASE AGE - bdc-pvc-79d25095-eb1f-4028-9843-7824cb82f07f blockdevice-d1ef1e1b9dccf224e000c6f2e908c5f2 Bound 12m - ``` - -8. Look at the BlockDevice details to see where the data is stored. Replace the BDC name with the one that was displayed in the previous step. - ``` - kubectl get bd -n openebs blockdevice-d1ef1e1b9dccf224e000c6f2e908c5f2 -o yaml - ``` - The output shows that the BD is on the node `spec.nodeAttributes.nodeName: gke-user-helm-default-pool-92abeacf-89nd`. - - ```shell hideCopy - apiVersion: openebs.io/v1alpha1 - kind: BlockDevice - metadata: - name: blockdevice-d1ef1e1b9dccf224e000c6f2e908c5f2 - namespace: openebs - ... 
- spec: - capacity: - logicalSectorSize: 4096 - physicalSectorSize: 4096 - storage: 402653184000 - claimRef: - apiVersion: openebs.io/v1alpha1 - kind: BlockDeviceClaim - name: bdc-pvc-79d25095-eb1f-4028-9843-7824cb82f07f - namespace: openebs - uid: 8efe7480-9117-4f51-b271-84ee51a94684 - details: - compliance: SPC-4 - deviceType: disk - driveType: SSD - hardwareSectorSize: 4096 - logicalBlockSize: 4096 - model: EphemeralDisk - physicalBlockSize: 4096 - serial: local-ssd-0 - vendor: Google - devlinks: - - kind: by-id - links: - - /dev/disk/by-id/scsi-0Google_EphemeralDisk_local-ssd-0 - - /dev/disk/by-id/google-local-ssd-0 - - kind: by-path - links: - - /dev/disk/by-path/pci-0000:00:04.0-scsi-0:0:1:0 - filesystem: - fsType: ext4 - mountPoint: /mnt/disks/ssd0 - nodeAttributes: - nodeName: gke-user-helm-default-pool-92abeacf-89nd - partitioned: "No" - path: /dev/sdb - status: - claimState: Claimed - state: Active - ``` - -:::note -A few important details from the above Block Device are: -- `spec.filesystem` indicates if the BlockDevice has been formatted and the path where it has been mounted. - - If the block device is pre-formatted as in the above case, the PV will be created with path as `spec.filesystem.mountPoint`. - - If the block device is not formatted, it will be formatted with the filesystem specified in the PVC and StorageClass. Default is `ext4`. -::: - -## Cleanup - -Delete the Pod, the PersistentVolumeClaim and StorageClass that you might have created. - -``` -kubectl delete pod hello-local-device-pod -kubectl delete pvc local-device-pvc -kubectl delete sc local-device -``` - -Verify that the PV that was dynamically created is also deleted. -``` -kubectl get pv -``` - -## Backup and Restore - -OpenEBS Local Volumes can be backed up and restored along with the application using [Velero](https://velero.io). - -:::note -The following steps assume that you already have Velero with Restic integration is configured. If not, please follow the [Velero Documentation](https://velero.io/docs/) to proceed with install and setup of Velero. If you encounter any issues or have questions, talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). -::: - -### Backup - -The following steps will help you to prepare and backup the data from the volume created for the example pod (`hello-local-device-pod`), with the volume mount (`local-storage`). - -1. Prepare the application pod for backup. Velero uses Kubernetes labels to select the pods that need to be backed up. Velero uses annotation on the pods to determine which volumes need to be backed up. For the example pod launched in this guide, you can inform velero to backup by specifying the following label and annotation. - - ``` - kubectl label pod hello-local-device-pod app=test-velero-backup - kubectl annotate pod hello-local-device-pod backup.velero.io/backup-volumes=local-storage - ``` -2. Create a Backup using velero. - - ``` - velero backup create bbb-01 -l app=test-velero-backup - ``` - -3. Verify that backup is successful. - - ``` - velero backup describe bbb-01 --details - ``` - - On successful completion of the backup, the output of the backup describe command will show the following: - ```shell hideCopy - ... - Restic Backups: - Completed: - default/hello-local-device-pod: local-storage - ``` - -### Restore - -1. Install and Setup Velero, with the same provider where backups were saved. Verify that backups are accessible. 
- - ``` - velero backup get - ``` - - The output of should display the backups that were taken successfully. - ```shell hideCopy - NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR - bbb-01 Completed 2020-04-25 15:49:46 +0000 UTC 29d default app=test-velero-backup - ``` - -2. Restore the application. - - :::note - Local PVs are created with node affinity. As the node names will change when a new cluster is created, create the required PVC(s) prior to proceeding with restore. - ::: - - Replace the path to the PVC yaml in the below commands, with the PVC that you have created. - ``` - kubectl apply -f https://openebs.github.io/charts/examples/local-device/local-device-pvc.yaml - velero restore create rbb-01 --from-backup bbb-01 -l app=test-velero-backup - ``` - -3. Verify that application is restored. - - ``` - velero restore describe rbb-01 - ``` - - Depending on the data, it may take a while to initialize the volume. On successful restore, the output of the above command should show: - ```shell hideCopy - ... - Restic Restores (specify --details for more information): - Completed: 1 - ``` - -4. Verify that data has been restored. The application pod used in this example, write periodic messages (greetings) to the volume. - - ``` - kubectl exec hello-local-device-pod -- cat /mnt/store/greet.txt - ``` - - The output will show that backed up data as well as new greetings that started appearing after application pod was restored. - ```shell hideCopy - Sat Apr 25 15:41:30 UTC 2020 [hello-local-device-pod] Hello from OpenEBS Local PV. - Sat Apr 25 15:46:30 UTC 2020 [hello-local-device-pod] Hello from OpenEBS Local PV. - Sat Apr 25 16:11:25 UTC 2020 [hello-local-device-pod] Hello from OpenEBS Local PV. - ``` - -## Troubleshooting - -Review the logs of the OpenEBS Local PV provisioner. OpenEBS Dynamic Local Provisioner logs can be fetched using. - -``` -kubectl logs -n openebs -l openebs.io/component-name=openebs-localpv-provisioner -``` - -## Support - -If you encounter issues or have a question, file an [Github issue](https://github.com/openebs/openebs/issues/new), or talk to us on the [#openebs channel on the Kubernetes Slack server](https://kubernetes.slack.com/messages/openebs/). - -## See Also: - -[Understand OpenEBS Local PVs ](/concepts/localpv) [Node Disk Manager](/user-guides/ndm) diff --git a/docs/main/user-guides/mayastor.md b/docs/main/user-guides/mayastor.md deleted file mode 100644 index d6c9aa0d7..000000000 --- a/docs/main/user-guides/mayastor.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -id: mayastor -title: Mayastor User Guide -keywords: - - Mayastor -description: Mayastor documentation is hosted and actively maintained at https://mayastor.gitbook.io/introduction/ ---- - -### Install and Setup - -:::warning -Mayastor is incompatible with NDM (openebs-ndm) and cStor (cstor). Installing or upgrading Mayastor with `--set mayastor.enabled=true` will either not deploy LocalPV Provisioner and NDM or will remove them (if they already exist). - -However, installing Mayastor will not affect any preexisting LocalPV volumes. -::: - -Before deploying and using Mayastor ensure that all of the [prerequisites](https://mayastor.gitbook.io/introduction/quickstart/prerequisites) are met. 
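As a quick pre-flight check before installing, the huge page and NVMe kernel module prerequisites can be verified on each worker node. This is only a sketch; the authoritative list is in the prerequisites linked above, and how you reach the nodes depends on your environment.

```
# Run on each worker node that will host the Mayastor io-engine.

# 2MiB huge pages: at least 1024 pages (2GiB) should be available.
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo

# If the count is too low, reserve the pages and persist the setting.
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
echo 'vm.nr_hugepages = 1024' | sudo tee -a /etc/sysctl.d/99-mayastor.conf

# NVMe over TCP support is needed for Mayastor volume targets.
sudo modprobe nvme-tcp
lsmod | grep nvme_tcp
```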
- -- To install Mayastor in a new cluster using OpenEBS chart, execute: - -``` -helm repo add openebs https://openebs.github.io/charts -helm repo update -helm install openebs --namespace openebs openebs/openebs --set mayastor.enabled=true --create-namespace -``` - -Once the installation is complete, move to the next step: [configuring Mayastor](https://mayastor.gitbook.io/introduction/quickstart/configure-mayastor). - - - -_For more information about Mayastor check out the [Mayastor documentation](https://mayastor.gitbook.io/introduction/)._ - - diff --git a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/additional-information/migrate-etcd.md b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/additional-information/migrate-etcd.md index 5dd3faf53..c404a794f 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/additional-information/migrate-etcd.md +++ b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/additional-information/migrate-etcd.md @@ -10,7 +10,7 @@ description: This section explains the Etcd Migration Procedure. By following the given steps, you can successfully migrate etcd from one node to another during maintenance activities like node drain etc., ensuring the continuity and integrity of the etcd data. :::note -Take a snapshot of the etcd. Click [here](https://etcd.io/docs/v3.5/op-guide/recovery/) for the detailed documentation. +Take a snapshot of the etcd. Refer to [Disaster Recovery documentation](https://etcd.io/docs/v3.5/op-guide/recovery/) for more information. ::: ## Step 1: Draining the etcd Node diff --git a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/additional-information/scale-etcd.md b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/additional-information/scale-etcd.md index 439993781..c1a88bed6 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/additional-information/scale-etcd.md +++ b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/additional-information/scale-etcd.md @@ -34,7 +34,7 @@ pool-2 worker-2 Online Online 374710730752 21793603584 35291712 ``` :::note -Take a snapshot of the etcd. Click [here](https://etcd.io/docs/v3.5/op-guide/recovery/) for the detailed documentation. +Take a snapshot of the etcd. Refer to the [Disaster Recovery documentation](https://etcd.io/docs/v3.5/op-guide/recovery/) for more details. ::: * From etcd-0/1/2, we can see that all the values are registered in the database. Once we scale up etcd with "n" replicas, all the key-value pairs should be available across all the pods. diff --git a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/advanced-operations/snapshot.md b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/advanced-operations/snapshot.md index 6dc58abe8..a991b6f70 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/advanced-operations/snapshot.md +++ b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/advanced-operations/snapshot.md @@ -30,7 +30,7 @@ Unlike volume replicas, snapshots cannot be rebuilt on an event of a node failur ## Prerequisites -Install and configure Replicated PV Mayastor by following the steps given in the [Installing OpenEBS documentation](../rs-installation.md) and create disk pools. 
+Install and configure Replicated PV Mayastor by following the steps given in the [OpenEBS Installation documentation](../rs-installation.md) and create disk pools. **Command** @@ -201,8 +201,8 @@ kubectl get volumesnapshot **Example Output** ``` -NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE -mayastor-pvc-snap true ms-volume-claim 1Gi csi-mayastor-snapshotclass snapcontent-174d9cd9-dfb2-4e53-9b56-0f3f783518df 57s 57s +NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE +mayastor-pvc-snap true ms-volume-claim 1Gi csi-mayastor-snapshotclass snapcontent-174d9cd9-dfb2-4e53-9b56-0f3f783518df 57s 57s ``` **Command** @@ -214,7 +214,7 @@ kubectl get volumesnapshotcontent **Example Output** ``` -NAME READYTOUSE RESTORESIZE DELETIONPOLICY DRIVER VOLUMESNAPSHOTCLASS VOLUMESNAPSHOT VOLUMESNAPSHOTNAMESPACE AGE +NAME READYTOUSE RESTORESIZE DELETIONPOLICY DRIVER VOLUMESNAPSHOTCLASS VOLUMESNAPSHOT VOLUMESNAPSHOTNAMESPACE AGE snapcontent-174d9cd9-dfb2-4e53-9b56-0f3f783518df true 1073741824 Delete io.openebs.csi-mayastor csi-mayastor-snapshotclass mayastor-pvc-snap default 87s ``` diff --git a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/advanced-operations/supportability.md b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/advanced-operations/supportability.md index 5aa3aecdb..09fbb3a4a 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/advanced-operations/supportability.md +++ b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/advanced-operations/supportability.md @@ -69,7 +69,7 @@ Supportability - collects state & log information of services and dumps it to a :::note The information collected by the supportability tool is solely used for debugging purposes. The content of these files is human-readable and can be reviewed, deleted, or redacted as necessary to adhere to the organization's data protection/privacy commitments and security policies before transmitting the bundles. -See [here](#does-the-supportability-tool-expose-sensitive-data) for more details. +Refer the section [Does the supportability tool expose sensitive data?](#does-the-supportability-tool-expose-sensitive-data) for more details. ::: The archive files generated by the dump command are stored in the specified output directories. The tables below specify the path and the content that will be stored in each archive file. diff --git a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md index da7e970d8..ed12375d6 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md +++ b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-configuration.md @@ -126,7 +126,7 @@ pool-on-node-3 node-3-14944 Created Online 10724835328 0 1072 ## Create Replicated PV Mayastor StorageClass\(s\) -Replicated PV Mayastor dynamically provisions PersistentVolumes \(PVs\) based on StorageClass definitions created by the user. Parameters of the definition are used to set the characteristics and behaviour of its associated PVs. See [storage class parameter description](#storage-class-parameters) for a detailed description of these parameters. 
+Replicated PV Mayastor dynamically provisions PersistentVolumes \(PVs\) based on StorageClass definitions created by the user. Parameters of the definition are used to set the characteristics and behaviour of its associated PVs. Refer [Storage Class parameters](#storage-class-parameters) for a detailed description of these parameters. Most importantly StorageClass definition is used to control the level of data protection afforded to it (i.e. the number of synchronous data replicas that are maintained for purposes of redundancy). It is possible to create any number of StorageClass definitions, spanning all permitted parameter permutations. We illustrate this quickstart guide with two examples of possible use cases; one which offers no data redundancy \(i.e. a single data replica\), and another having three data replicas. @@ -214,7 +214,7 @@ The `agents.core.capacity.thin` spec present in the Replicated PV Mayastor helm ### "allowVolumeExpansion" -The parameter `allowVolumeExpansion` enables the expansion of PVs when using Persistent Volume Claims (PVCs). You must set the `allowVolumeExpansion` parameter to `true` in the StorageClass to enable the expansion of a volume. In order to expand volumes where volume expansion is enabled, edit the size of the PVC. See the [Resize documentation](../replicated-pv-mayastor/advanced-operations/resize.md) for more details. +The parameter `allowVolumeExpansion` enables the expansion of PVs when using Persistent Volume Claims (PVCs). You must set the `allowVolumeExpansion` parameter to `true` in the StorageClass to enable the expansion of a volume. In order to expand volumes where volume expansion is enabled, edit the size of the PVC. Refer to the [Resize documentation](../replicated-pv-mayastor/advanced-operations/resize.md) for more details. ## Topology Parameters diff --git a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-deployment.md b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-deployment.md index e8a6a8bd1..4fc9dc1ce 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-deployment.md +++ b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-deployment.md @@ -161,7 +161,7 @@ ID REPLICAS TARGET-NODE ACCES Verify that the pod has been deployed successfully, having the status "Running". It may take a few seconds after creating the pod before it reaches that status, proceeding via the "ContainerCreating" state. :::info -Note: The example FIO pod resource declaration included with this release references a PVC named `ms-volume-claim`, consistent with the example PVC created in this section of the quickstart. If you have elected to name your PVC differently, deploy the Pod using the example YAML, modifying the `claimName` field appropriately. +The example FIO pod resource declaration included with this release references a PVC named `ms-volume-claim`, consistent with the example PVC created in this section of the quickstart. If you have elected to name your PVC differently, deploy the Pod using the example YAML, modifying the `claimName` field appropriately. 
::: **Command** diff --git a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-installation.md b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-installation.md index 9761a0ba8..575d3e3de 100644 --- a/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-installation.md +++ b/docs/main/user-guides/replicated-storage-user-guide/replicated-pv-mayastor/rs-installation.md @@ -291,7 +291,7 @@ If you set `csi.node.topology.nodeSelector: true`, then you will need to label t ## Installation -For installation instructions, see [here](../../../quickstart-guide/installation.md). +Refer to the [OpenEBS Installation documentation](../../../quickstart-guide/installation.md) to install Replicated PV Mayastor. ## Support diff --git a/docs/main/user-guides/uninstallation.md b/docs/main/user-guides/uninstallation.md index e31e57aa8..9bd6c818b 100644 --- a/docs/main/user-guides/uninstallation.md +++ b/docs/main/user-guides/uninstallation.md @@ -1,9 +1,11 @@ --- id: uninstall -title: Uninstalling OpenEBS +title: OpenEBS Uninstallation keywords: + - OpenEBS Uninstallation - Uninstalling OpenEBS - Uninstall OpenEBS + - Uninstallation description: This section is to describe about the graceful deletion/uninstallation of your OpenEBS cluster. --- diff --git a/docs/main/user-guides/upgrades.md b/docs/main/user-guides/upgrades.md index d2e715c93..3a3e6eaeb 100644 --- a/docs/main/user-guides/upgrades.md +++ b/docs/main/user-guides/upgrades.md @@ -16,7 +16,7 @@ Upgrade from OpenEBS 3.x to OpenEBS 4.1.0 is only supported for the below storag - Local PV ZFS - Replicated PV Mayastor -See the [migration documentation](../user-guides/data-migration/migration-overview.md) for other storages. +Refer to the [Migration documentation](../user-guides/data-migration/migration-overview.md) for other storages. 
 
 :::
 
 ## Overview

From 41a65a27f9026efc5f0ab8e4fefc1e183a31e8bf Mon Sep 17 00:00:00 2001
From: Bala Harish <161304963+balaharish7@users.noreply.github.com>
Date: Wed, 17 Jul 2024 11:00:51 +0530
Subject: [PATCH 2/4] docs: created a new document for observability

Signed-off-by: Bala Harish <161304963+balaharish7@users.noreply.github.com>
---
 docs/i18n/en/code.json | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/i18n/en/code.json b/docs/i18n/en/code.json
index a35dad35b..aa62badae 100644
--- a/docs/i18n/en/code.json
+++ b/docs/i18n/en/code.json
@@ -193,15 +193,15 @@
   },
   "theme.docs.versions.unreleasedVersionLabel": {
     "message": "This is the documentation for the development version of OpenEBS.",
-    "description": "The label used to tell the user that they are browsing an unreleased doc version"
+    "description": "The label used to tell the user that he's browsing an unreleased doc version"
   },
   "theme.docs.versions.unmaintainedVersionLabel": {
     "message": "This is the documentation for {siteTitle} {versionLabel}, which is no longer actively maintained.",
-    "description": "The label used to tell the user that they are browsing an unmaintained doc version"
+    "description": "The label used to tell the user that he's browsing an unmaintained doc version"
   },
   "theme.docs.versions.latestVersionSuggestionLabel": {
     "message": "See the {latestVersionLink} ({versionLabel}) to view the latest documentation.",
-    "description": "The label used to tell the user that they are browsing an unmaintained doc version"
+    "description": "The label userd to tell the user that he's browsing an unmaintained doc version"
   },
   "theme.docs.versions.latestVersionLinkLabel": {
     "message": "latest version",

From 9ddc185137041e5fbe027a29ac879d04fcfb8b89 Mon Sep 17 00:00:00 2001
From: Bala Harish <161304963+balaharish7@users.noreply.github.com>
Date: Thu, 1 Aug 2024 10:33:58 +0530
Subject: [PATCH 3/4] docs: modified the docs as per the comments & added GCP to the Glossary

Signed-off-by: Bala Harish <161304963+balaharish7@users.noreply.github.com>
---
 docs/main/glossary.md                                    | 1 +
 docs/main/quickstart-guide/deploy-a-test-application.md  | 2 +-
 docs/main/releases.md                                    | 2 +-
 .../data-migration/migration-using-velero/overview.md    | 6 +++---
 .../local-pv-zfs/zfs-configuration.md                    | 2 +-
 .../local-pv-zfs/zfs-installation.md                     | 2 +-
 6 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/docs/main/glossary.md b/docs/main/glossary.md
index aeabf74e7..bd74cf6b7 100644
--- a/docs/main/glossary.md
+++ b/docs/main/glossary.md
@@ -21,6 +21,7 @@ description: This section lists the abbreviations used thorughout the OpenEBS do
 | EKS | Elastic Kubernetes Service |
 | FIO | Flexible IO Tester |
 | FSB | File System Backup |
+| GCP | Google Cloud Platform |
 | GCS | Google Cloud Storage |
 | GKE | Google Kubernetes Engine |
 | HA | High Availability |

diff --git a/docs/main/quickstart-guide/deploy-a-test-application.md b/docs/main/quickstart-guide/deploy-a-test-application.md
index 53a7f61ed..24bcdef2d 100644
--- a/docs/main/quickstart-guide/deploy-a-test-application.md
+++ b/docs/main/quickstart-guide/deploy-a-test-application.md
@@ -84,7 +84,7 @@ The next step is to create a PersistentVolumeClaim. Pods will use PersistentVolu
   ```
 
   :::note
-  As the Local PV storage classes use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in the Pod spec, then PVC will remain in `pending` state. Refer the issue [#2915](https://github.com/openebs/openebs/issues/2915) for more details.
+  As the Local PV storage classes use `waitForFirstConsumer`, do not use `nodeName` in the Pod spec to specify node affinity. If `nodeName` is used in the Pod spec, then PVC will remain in `pending` state. Refer to the issue [#2915](https://github.com/openebs/openebs/issues/2915) for more details.
   :::
 
 2. Create the Pod:

diff --git a/docs/main/releases.md b/docs/main/releases.md
index 6199b33a9..f2c5ee32d 100644
--- a/docs/main/releases.md
+++ b/docs/main/releases.md
@@ -81,7 +81,7 @@ Earlier, the scale of volume was not allowed when the volume already has a snaps
 
 ### Watch Items and Known Issues - Local Storage
 
 Local PV ZFS / Local PV LVM on a single worker node encounters issues after upgrading to the latest versions. The issue is specifically associated with the change of the controller manifest to a Deployment type, which results in the failure of new controller pods to join the Running state. The issue appears to be due to the affinity rules set in the old pod, which are not present in the new pods. As a result, since both the old and new pods have relevant labels, the scheduler cannot place the new pod on the same node, leading to scheduling failures when there's only a single node.
-The workaround is to delete the old pod so the new pod can get scheduled. Refer the issue [#3741](https://github.com/openebs/openebs/issues/3751) for more details.
+The workaround is to delete the old pod so the new pod can get scheduled. Refer to the issue [#3741](https://github.com/openebs/openebs/issues/3751) for more details.
 
 ### Watch Items and Known Issues - Replicated Storage

diff --git a/docs/main/user-guides/data-migration/migration-using-velero/overview.md b/docs/main/user-guides/data-migration/migration-using-velero/overview.md
index 76046d3a8..c98723c15 100644
--- a/docs/main/user-guides/data-migration/migration-using-velero/overview.md
+++ b/docs/main/user-guides/data-migration/migration-using-velero/overview.md
@@ -13,9 +13,9 @@ This documentation outlines the process of migrating application volumes from CS
 **Velero Support**: Velero supports the backup and restoration of Kubernetes volumes attached to pods through File System Backup (FSB) or Pod Volume Backup. This process involves using modules from popular open-source backup tools like Restic (which we will utilize).
 
 - For **cloud provider plugins**, see the [Velero Docs - Providers section](https://velero.io/docs/main/supported-providers/).
-- **Velero GKE Configuration (Prerequisites)**: You can find the prerequisites and configuration details for Velero in a Google Kubernetes Engine (GKE) environment on the GitHub [here](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup).
-- **Object Storage Requirement**: To store backups, Velero necessitates an object storage bucket. In our case, we utilize a Google Cloud Storage (GCS) bucket. Configuration details and setup can be found on the GitHub [here](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup).
-- **Velero Basic Installation**: For a step-by-step guide on the basic installation of Velero, see the [Velero Docs - Basic Install section](https://velero.io/docs/v1.11/basic-install/).
+- **Velero GKE Configuration (Prerequisites)**: Refer [Velero plugin for Google Cloud Platform (GCP)](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) to view the prerequisites and configuration details for Velero in a Google Kubernetes Engine (GKE) environment.
+- **Object Storage Requirement**: To store backups, Velero necessitates an object storage bucket. In our case, we utilize a Google Cloud Storage (GCS) bucket. Refer [Velero plugin for GCP](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) to view the setup and configuration details.
+- **Velero Basic Installation**: Refer to the [Velero Documentation - Basic Install section](https://velero.io/docs/v1.11/basic-install/) for a step-by-step guide on the basic installation of Velero.
 
 ## See Also

diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md
index 10fca8467..b5b19a356 100644
--- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md
+++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-configuration.md
@@ -125,7 +125,7 @@ The provisioner name for ZFS driver is "zfs.csi.openebs.io", we have to use this
 
 **Scheduler**
 
-The ZFS driver has its own scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it will try to find a ZFS pool which has less number of volumes provisioned in it or less capacity of volume provisioned out of a pool respectively, from all the nodes where the ZFS pools are available. Refer [StorageClass With k8s Scheduler](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler) to know about how to select scheduler via storage-class.
+The ZFS driver has its own scheduler which will try to distribute the PV across the nodes so that one node should not be loaded with all the volumes. Currently the driver supports two scheduling algorithms: VolumeWeighted and CapacityWeighted, in which it will try to find a ZFS pool which has less number of volumes provisioned in it or less capacity of volume provisioned out of a pool respectively, from all the nodes where the ZFS pools are available. Refer [StorageClass With K8s Scheduler](https://github.com/openebs/zfs-localpv/blob/HEAD/docs/storageclasses.md#storageclass-with-k8s-scheduler) to learn how to select a scheduler via storage class.
 
 Once it can find the node, it will create a PV for that node and also create a ZFSVolume custom resource for the volume with the NODE information. The watcher for this ZFSVolume CR will get all the information for this object and creates a ZFS dataset (zvol) with the given ZFS property on the mentioned node.
 
diff --git a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md
index 5e18e2c30..3b2ad3e9c 100644
--- a/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md
+++ b/docs/main/user-guides/local-storage-user-guide/local-pv-zfs/zfs-installation.md
@@ -34,7 +34,7 @@ If you have the disk (say /dev/sdb), then you can use the below command to creat
 $ zpool create zfspv-pool /dev/sdb
 ```
 
-You can also create mirror or raidz pool as per your need. Refer [Local PV ZFS](https://github.com/openzfs/zfs) for more information.
+You can also create mirror or raidz pool as per your need. Refer to the [OpenZFS Documentation](https://openzfs.github.io/openzfs-docs/) for more details.
 
 If you do not have the disk, then you can create the zpool on the loopback device which is backed by a sparse file. Use this for testing purpose only.
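For the loopback-backed test pool mentioned in the zfs-installation.md hunk above, a minimal sketch of what that setup typically looks like is shown below. The file path, file size, and pool name are placeholders and not values from this patch; treat it as illustrative only, and use it only in test environments.

```bash
# Create a sparse file to act as the backing store for a test-only zpool.
truncate -s 32G /var/openebs/zfs-test-disk.img

# Attach the file to the first free loop device and create the pool on it.
zpool create zfspv-pool "$(losetup -f --show /var/openebs/zfs-test-disk.img)"

# Confirm the pool is online before pointing a Local PV ZFS StorageClass at it.
zpool status zfspv-pool
```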
From 69b69d57b4490d9ddca3adaff492e4d74697a3b3 Mon Sep 17 00:00:00 2001
From: Bala Harish <161304963+balaharish7@users.noreply.github.com>
Date: Thu, 1 Aug 2024 14:08:13 +0530
Subject: [PATCH 4/4] docs: modified the docs as per the comments

Signed-off-by: Bala Harish <161304963+balaharish7@users.noreply.github.com>
---
 .../data-migration/migration-using-velero/overview.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/main/user-guides/data-migration/migration-using-velero/overview.md b/docs/main/user-guides/data-migration/migration-using-velero/overview.md
index c98723c15..74c506712 100644
--- a/docs/main/user-guides/data-migration/migration-using-velero/overview.md
+++ b/docs/main/user-guides/data-migration/migration-using-velero/overview.md
@@ -14,7 +14,7 @@ This documentation outlines the process of migrating application volumes from CS
 
 - For **cloud provider plugins**, see the [Velero Docs - Providers section](https://velero.io/docs/main/supported-providers/).
 - **Velero GKE Configuration (Prerequisites)**: Refer [Velero plugin for Google Cloud Platform (GCP)](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) to view the prerequisites and configuration details for Velero in a Google Kubernetes Engine (GKE) environment.
-- **Object Storage Requirement**: To store backups, Velero necessitates an object storage bucket. In our case, we utilize a Google Cloud Storage (GCS) bucket. Refer [Velero plugin for GCP](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) to view the setup and configuration details.
+- **Object Storage Requirement**: Velero necessitates an object storage bucket to store backups. In this case, we are using a Google Cloud Storage (GCS) bucket. Refer [Velero plugin for GCP](https://github.com/vmware-tanzu/velero-plugin-for-gcp#setup) to view the setup and configuration details.
 - **Velero Basic Installation**: Refer to the [Velero Documentation - Basic Install section](https://velero.io/docs/v1.11/basic-install/) for a step-by-step guide on the basic installation of Velero.
 
 ## See Also
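The overview.md changes above describe migrating volumes with Velero using File System Backup (Restic) and a GCS bucket. A rough, non-authoritative sketch of that flow follows; the bucket name, credentials file, plugin version, and namespace are assumptions rather than values taken from this patch, so adjust them for your environment.

```bash
# Install Velero with the GCP object-store plugin and enable File System Backup (node agent).
velero install \
  --provider gcp \
  --plugins velero/velero-plugin-for-gcp:v1.8.0 \
  --bucket openebs-migration-backups \
  --secret-file ./credentials-velero \
  --use-node-agent \
  --default-volumes-to-fs-backup

# On the source cluster, back up the application namespace (volumes are captured via FSB).
velero backup create demo-app-backup --include-namespaces demo-app

# On the destination cluster, configured against the same bucket, restore the backup.
velero restore create --from-backup demo-app-backup
```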