Acknowledgements:

This repo was cloned from Jason Kincl's excellent "Lustre on OpenShift" GitHub repo.

Graid Technology's SupremeRAID for OpenShift

GOAL: Enable Graid Technology's SupremeRAID high-performance storage for OpenShift workloads

Tasks:

  • Install any NVIDIA dependencies (NVIDIA Operator?)
  • Build the kernel modules using the Kubernetes Kernel Module Management operator and OpenShift's Driver Toolkit image

Building SupremeRAID kernel modules for Red Hat CoreOS

Red Hat CoreOS is based on RHEL but uses an extended update kernel. To make it easier to build kernel modules, the Driver Toolkit image is available, which contains all of the kernel development headers for a particular release of OpenShift.
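For example, the Driver Toolkit image that matches a given OpenShift release can be looked up from the release payload (the release image tag below is just an example; substitute your cluster's version):

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64 --image-for=driver-toolkit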

In the past we developed the Special Resource Operator to help manage specialized resources on OpenShift, but we are migrating to a collaborative upstream effort for managing kernel modules on Kubernetes: the Kernel Module Management (KMM) operator.

Deploying the Kernel Module Management operator

Install the Kernel Module Management operator (pulling from midstream until it is deployed into the catalogs); see the KMM installation documentation:

$ oc apply -k https://github.com/rh-ecosystem-edge/kernel-module-management/config/default
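To verify the deployment, check that the operator pods are running (the upstream default kustomize typically creates a kmm-operator-system namespace; adjust if your deployment differs):

$ oc get pods -n kmm-operator-system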

Create Module Custom Resources

The root of this git repository contains a kustomize configuration that deploys our Module custom resources (defined in kmm.yaml).

We are lazily labeling all nodes with feature.kmm.graid=true to enable the KMM operator, although only the worker nodes actually need it.

$ git clone ...

$ oc new-project graid
$ oc apply -k .
$ oc get nodes -o name | xargs -I{} oc label {} feature.kmm.graid=true
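For reference, the Module resources in kmm.yaml follow the KMM v1beta1 API. A minimal sketch along these lines is shown below; the module name, image reference, and Dockerfile ConfigMap name are illustrative assumptions, so check kmm.yaml for the actual values.

apiVersion: kmm.sigs.x-k8s.io/v1beta1
kind: Module
metadata:
  name: graid                # hypothetical name; see kmm.yaml for the real one
  namespace: graid
spec:
  moduleLoader:
    container:
      modprobe:
        moduleName: graid    # assumed kernel module name
      kernelMappings:
        # build and load for every kernel found on the selected nodes
        - regexp: '^.+$'
          containerImage: image-registry.openshift-image-registry.svc:5000/graid/graid-driver:${KERNEL_FULL_VERSION}
          build:
            dockerfileConfigMap:
              name: graid-dockerfile   # assumed ConfigMap carrying the Dockerfile
  selector:
    feature.kmm.graid: "true"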

Building the SupremeRAID kernel module container image

The KMM operator will kick off an OpenShift Build using the Dockerfile in this repository, which builds the kernel modules against the Red Hat CoreOS kernel. Once the build completes, KMM creates DaemonSets that load the kernel modules on the nodes labeled feature.kmm.graid=true.

Creating GRAID storage

TODO: Create PD, VG, VDs...
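A sketch of what this will likely look like with the graidctl CLI, assuming four NVMe drives and a RAID5 drive group; the exact subcommand syntax should be checked against the SupremeRAID documentation for your release:

$ graidctl create physical_drive /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
$ graidctl create drive_group raid5 0 1 2 3
$ graidctl create virtual_drive 0
$ graidctl list virtual_drive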

Seeing the GRAID storage

TODO: lsblk
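Until this section is fleshed out, a quick check from outside the node is a debug pod (the node name below is a placeholder):

$ oc debug node/<worker-node> -- chroot /host lsblk

The SupremeRAID virtual drives should show up as additional block devices alongside the raw NVMe namespaces.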

Consume the storage using LVMStorage

TODO: Is LVMStorage the right tool? What about RWX volumes?
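If LVM Storage (LVMS) does turn out to be the right fit, a minimal LVMCluster pointing a device class at a SupremeRAID virtual drive might look like the sketch below; the device path and names are assumptions:

apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: graid-lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: graid                # LVMS creates a StorageClass named lvms-graid
        deviceSelector:
          paths:
            - /dev/gdg0n1          # assumed device path of a SupremeRAID virtual drive
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10

Note that LVMS volumes are node-local and ReadWriteOnce, so RWX would need a different layer on top.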

Testing

TODO: Create a VM - observe StorageClass bindingMode: WaitForFirstConsumer
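A simple way to observe the binding behavior, assuming the LVMS StorageClass from the previous section (lvms-graid here is an assumption): create a PVC and note that it stays Pending until a pod or VM that uses it is scheduled, because the StorageClass uses volumeBindingMode: WaitForFirstConsumer.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: graid-test-pvc
  namespace: graid
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: lvms-graid     # assumed StorageClass created by LVMS
  resources:
    requests:
      storage: 10Gi

$ oc get pvc graid-test-pvc   # remains Pending until a consumer is scheduled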