Adding sidecars to nodepools for any additional functionality #928
base: main
Conversation
Signed-off-by: M Samuel Vijaykumar <[email protected]>
Thanks @samof76. Adding @swoehrl-mw, please take a look and review.
Thanks for your contribution @samof76. I understand the motivation of this PR is to support generic sidecars. Coming to the root problem with node-exporter: even when node-exporter is installed as a DaemonSet that can query the OpenSearch pods via the cluster DNS, there is a limitation in getting pod-level (pod-specific) metrics. I am just curious whether there is a solution for this supported out of the box by Prometheus, or whether running a sidecar is the only way to solve it. I was reading some examples and saw https://github.com/lightstep/opentelemetry-prometheus-sidecar, https://docs.lightstep.com/docs/replace-prometheus-with-an-otel-collector-on-kubernetes, and https://signoz.io/guides/how-to-monitor-custom-kubernetes-pod-metrics-using-prometheus/. I assume it should be possible to scrape the pod-specific metrics.
I am not sure this is how node-exporter should be used; it's called node exporter for a reason. We'd rather have the application export proper metrics, either directly or via the Prometheus exporter.
The ability to configure sidecars is one I can support. But I agree with @eyenx: node-exporter does not belong in a sidecar. OpenSearch-specific metrics can be collected via the dedicated exporter that can be configured through the monitoring feature; other metrics should come from Kubernetes or node-level exporters. @samof76
Since pods run in isolated network namespaces shared by all containers in the pod, it is not easy to get those metrics from a node-exporter running as a DaemonSet on the node. So we run a node-exporter in all our high-throughput pods to collect those TCP metrics, with something like the sketch below.
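A minimal sketch of such a sidecar container spec, using plain Kubernetes container fields (the image tag here is illustrative; `--collector.disable-defaults`, `--collector.netstat`, and `--collector.netdev` are standard node_exporter flags):

```yaml
# Runs inside the pod, so it shares the pod's network namespace and
# therefore reports pod-level (not node-level) TCP/interface stats.
- name: node-exporter
  image: quay.io/prometheus/node-exporter:v1.7.0  # tag illustrative
  args:
    - --collector.disable-defaults
    - --collector.netstat    # TCP/UDP counters from /proc/net/netstat and /proc/net/snmp
    - --collector.netdev     # per-interface byte/packet counters
    - --web.listen-address=:9100
  ports:
    - name: metrics
      containerPort: 9100
```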
And it is currently the only way to get the pod's network-namespace metrics, so supporting this pattern is critical; it will help in monitoring OpenSearch more efficiently.
I disagree; we use the kubelet metrics to gather metrics like the ones shown below.
These metrics are grabbed via cAdvisor from kubelet: https://www.cloudforecast.io/blog/cadvisor-and-kubernetes-monitoring-guide/
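As an illustration, per-pod network usage shows up under standard cAdvisor series names such as the following (the label values are placeholders, not taken from this thread):

```
container_network_receive_bytes_total{namespace="...", pod="..."}
container_network_transmit_bytes_total{namespace="...", pod="..."}
container_network_receive_errors_total{namespace="...", pod="..."}
container_network_transmit_errors_total{namespace="...", pod="..."}
```

Each series carries namespace and pod labels, so pod-level network usage can be queried directly from the kubelet's cAdvisor endpoint.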
Description
Adds sidecar functionality to nodepools.
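As a rough sketch of how this could look from a user's perspective, the snippet below assumes a `sidecars` list under each node pool; the field name and shape are illustrative, and the authoritative definition is the CRD change in this PR:

```yaml
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-cluster
spec:
  nodePools:
    - component: nodes
      replicas: 3
      # Hypothetical field for this PR; the exact name/shape may differ.
      sidecars:
        - name: node-exporter
          image: quay.io/prometheus/node-exporter:v1.7.0
          ports:
            - name: metrics
              containerPort: 9100
```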
Issues Resolved
No issue was created, but I will try to explain why we need sidecars. Since containers run in their own network namespace, that network cannot be monitored from a node-exporter running on the node as a DaemonSet. So to monitor the network namespace, it is critical to run node-exporter as a sidecar.
This is especially important in systems with huge network throughput.
Check List
- No linter warnings (`make lint`)

If CRDs are changed:

- CRD YAMLs are updated (`make manifests`) and also copied into the helm chart

Please refer to the PR guidelines before submitting this pull request.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.