Is your feature request related to a problem? Please describe.
One of the common reasons for running a container orchestrator is the ability to increase the availability of workloads by spreading replicas over several nodes. However, by default, the Kubernetes scheduler does not take distribution of risk into account. It can therefore happen that all replicas of a Deployment or StatefulSet run on the same node. A node failure in that case would make the workload/service completely unavailable.
Users can achieve distribution across nodes by setting appropriate anti-affinity rules on their pods.
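For reference, a minimal anti-affinity stanza that forces replicas onto distinct nodes might look like the sketch below (the name and label `my-app` are placeholders, not taken from any real workload):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          # Hard requirement: no two replicas may land on the same node.
          # Use preferredDuringSchedulingIgnoredDuringExecution for a soft rule.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
      containers:
        - name: my-app
          image: my-app:latest
```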
Describe the solution you'd like
Popeye should issue a warning if a StatefulSet or Deployment has all of its replicas on the same node. The warning text should mention configuring anti-affinity as a mitigation. Example:
All pods on the same node. Consider setting anti-affinity.
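The check itself could be as simple as collecting the `.spec.nodeName` of each running replica and flagging the workload when they are all identical. A minimal sketch (the function name and signature are illustrative, not Popeye's actual API):

```go
package main

import "fmt"

// allOnSameNode reports whether every replica of a workload is scheduled on
// a single node. nodeNames holds the .spec.nodeName of each running pod.
func allOnSameNode(nodeNames []string) bool {
	if len(nodeNames) < 2 {
		// A single replica cannot be spread; no warning in that case.
		return false
	}
	first := nodeNames[0]
	for _, n := range nodeNames[1:] {
		if n != first {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(allOnSameNode([]string{"node-a", "node-a", "node-a"})) // true
	fmt.Println(allOnSameNode([]string{"node-a", "node-b", "node-a"})) // false
}
```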
Describe alternatives you've considered
The solution above looks at the symptom and may or may not catch the problem, depending on how the pods happen to be distributed at the moment of checking.
Parsing the pods' .spec.affinity and trying to deduce whether it would lead to distribution across nodes would solve this in theory. However, that appears far too complex.
Additional context
n/a