-
Is it possible to use strimzi-drain-cleaner with a StorageClass that locks PVs/PVCs to specific nodes? We're running Kubernetes on bare metal and use the OpenEBS LVM LocalPV storage provider (https://github.com/openebs/lvm-localpv, which is basically just hostpath storage with extra steps) to squeeze the most performance out of our drives.

When drain-cleaner marks a pod with the annotation, the operator kills that pod, and then, because the node is cordoned, the new pod gets stuck in Pending. That part is fine from my perspective, since the pod at least got a clean shutdown, but it still prevents the drain from succeeding. We have 3 Kafka brokers and 3 ZooKeeper nodes. When we drain, only 1 ZooKeeper node or only 1 broker gets terminated; I guess the cluster is not considered restored yet, so the Strimzi cluster operator won't terminate the other pods.

Is there a configuration where I can tell Strimzi that having 1 node down is OK and that it should continue with the manual restarts? And is it possible to tell drain-cleaner not to invalidate the webhook requests when the pods are just stuck in Pending?
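For context, our setup is roughly like the sketch below (the StorageClass name, volume group, and sizes are just placeholders, and the Kafka CR is trimmed to the relevant storage part). Since LVM LocalPV carves logical volumes out of a volume group on the node itself, every PV it provisions is pinned to that one node:

```yaml
# Roughly our StorageClass (names are placeholders). OpenEBS LVM LocalPV
# provisions logical volumes from a volume group on the node, so the
# resulting PVs are bound to that specific node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvm-local
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
volumeBindingMode: WaitForFirstConsumer
---
# The Kafka CR (trimmed to the storage section) points persistent-claim
# storage at that class, so the broker PVCs end up node-local as well.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      class: openebs-lvm-local
      deleteClaim: false
```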
-
It should work as long as you have only one Kafka broker (from the same Kafka cluster) on a given node and as long as you drain the nodes one by one.
I guess you could move to KRaft to get rid of ZooKeeper, or you can just disable the Drain Cleaner for ZooKeeper and have Kubernetes evict it based on its PodDisruptionBudget with maxUnavailable: 1. ZooKeeper syncs its data differently, so there should not be an issue with that.
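If it helps, here is a minimal sketch of that second option, assuming you deployed the Drain Cleaner from the upstream Kubernetes install files where the --kafka and --zookeeper options select which pods it watches. The image tag and the exact command prefix may differ in your version, and the Deployment is trimmed to the relevant part:

```yaml
# Fragment of the Drain Cleaner Deployment: keep --kafka but drop
# --zookeeper so ZooKeeper pods are no longer annotated for a manual
# restart and can be evicted by Kubernetes as usual.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-drain-cleaner
  namespace: strimzi-drain-cleaner
spec:
  template:
    spec:
      containers:
        - name: strimzi-drain-cleaner
          image: quay.io/strimzi/drain-cleaner:latest
          command:
            - "/application"
            - "-Dquarkus.http.host=0.0.0.0"
            - "--kafka"
            # "--zookeeper" removed: ZooKeeper is handled by normal eviction
---
# In the Kafka CR, make sure the ZooKeeper PDB allows one unavailable pod
# (1 is the Strimzi default; this is only needed if you lowered it to 0
# for the Drain Cleaner).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  zookeeper:
    template:
      podDisruptionBudget:
        maxUnavailable: 1
```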