To obtain the "Allocated resources:" from the output of kubectl describe node <node-name>, what query should be used? #2522
Comments
This issue is currently awaiting triage. If kube-state-metrics contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Hi @githubeto, we hit the same issue. Since the metric …, IMO we should add a new metric named …
The other metrics … It seems that we should import another struct to expose …
I believe that the most accurate value for determining whether a Pod can be scheduled is the Requests section under "Allocated resources:" in the output of `kubectl describe node <node-name>`.
This is because if these values are close to 100% (for example, 99%), attempting to schedule a Pod will result in a resource shortage, making it impossible to schedule.
So, how can we obtain this value using a Prometheus query in Grafana?
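One way to get a Grafana-friendly "percent allocated" figure is to divide the summed requests by each node's allocatable capacity. This is only a sketch using the standard kube-state-metrics metric names; as described below, without further filtering the numerator can overshoot the node's real allocation:

```promql
# Fraction of each node's allocatable memory claimed by container requests.
# Note: without filtering out terminated pods, this ratio can exceed 1.0.
sum by (node) (kube_pod_container_resource_requests{resource="memory"})
  / on (node)
max by (node) (kube_node_status_allocatable{resource="memory"})
```

The `max by (node)` on the right-hand side collapses any duplicate allocatable series (e.g. from multiple scrape targets) so the `on (node)` one-to-one match is well defined.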
The reason I am asking is that while the query

```promql
kube_node_status_allocatable{resource="memory"}
```

can be used to obtain the node's allocatable capacity, the sum of the memory requests of currently scheduled Pods obtained with

```promql
sum by (node) (kube_pod_container_resource_requests{resource="memory"})
```

clearly exceeds the node's capacity. Each of my nodes has 16Gi of memory, but this query returns a total that exceeds 16Gi.
```promql
sum by (node) (kube_pod_container_resource_requests{resource="memory"})
```

returns about 18.2Gi, even though the worker node only has 16Gi of memory.
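A likely explanation (my assumption, not confirmed in this thread) is that `kube_pod_container_resource_requests` is also exported for Pods that have already terminated (phase `Succeeded` or `Failed`), whereas the scheduler and `kubectl describe node` only count non-terminated Pods. Restricting the sum to Pods in `Pending` or `Running` phase, via a join on `kube_pod_status_phase`, should bring the total back in line with the "Allocated resources:" figure:

```promql
# Memory requests per node, counting only Pods that are Pending or Running.
# The inner max by () yields 1 for Pods in those phases and 0 otherwise,
# so terminated Pods contribute nothing to the sum.
sum by (node) (
    kube_pod_container_resource_requests{resource="memory"}
  * on (namespace, pod) group_left()
    max by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Running"})
)
```

`group_left()` is needed because the left-hand side has one series per container, i.e. a many-to-one match against the per-Pod phase series.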