Kong-ingress-controller 3.4 has high CPU usage when running 2 pods #6907
Comments
+1, I have the exact same issue here. Some of my Kong ingress controller replicas are hogging 2 vCPU each (2 out of 5). Running Kong Helm chart 2.46.0.
We have found this message in the logs:
I believe that the Kong kubernetes-ingress-controller 3.4 and 3.4.0 images have been compromised in Kong's Docker Hub account and replaced with a similar image with a crypto miner injected, which would explain the high CPU. This image was introduced in Helm chart 2.46.0 and also in the ingress-0.17.0 chart. Version 3.4 of the kong/kubernetes-ingress-controller image was released on 18 December 2024, but the Docker Hub image was last updated on 24 December 2024.
KIC image 3.4.0 contained unauthorized code. We have released 3.4.1, which removes the unauthorized code. We also removed the 3.4.0 tag and updated latest and 3.4 to point to 3.4.1.

For added security you can use the digest for the release: http://docker.io/kong/kubernetes-ingress-controller:3.4.1@sha256:45da0da02c395bfdb6a324370b87eca39098bad42b184b57d56a44d5d95da99e
For arm: sha256:e0125aa85a4c9eef7822ba5234e90958c71e1d29474d6247adc3e7e21327e8ee

Our investigation is continuing. At this point we believe the unauthorized actor exploited a misconfiguration in the KIC public repository build pipeline. We have rotated all keys and taken other measures to help ensure image integrity.
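For reference, a minimal sketch of what digest pinning looks like in a plain Kubernetes manifest, using the amd64 digest quoted above. The Deployment and label names here are illustrative only, not the ones the Helm chart actually renders:

```yaml
# Sketch: pin the controller image by digest instead of by tag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-ingress-controller          # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kong-ingress-controller
  template:
    metadata:
      labels:
        app: kong-ingress-controller
    spec:
      containers:
        - name: ingress-controller
          # The digest, not the tag, is what gets resolved at pull time, so a
          # re-pointed 3.4 or 3.4.1 tag cannot change which image is pulled.
          image: kong/kubernetes-ingress-controller:3.4.1@sha256:45da0da02c395bfdb6a324370b87eca39098bad42b184b57d56a44d5d95da99e
```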
@lahabana If Helm charts use a moving tag that points to a minor version, it would make sense to set imagePullPolicy to Always by default, so that when the moving tag (in this case 3.4) is updated, Kubernetes pulls the latest image automatically.
I agree with @camaeel's suggestion here; pinning to a SHA digest and setting the imagePullPolicy would be a wiser and more secure approach.
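As a rough sketch of that suggestion expressed as Helm values, assuming the chart exposes repository/tag/pullPolicy keys under ingressController.image (the exact key paths differ between chart versions, so check the chart's values.yaml before applying):

```yaml
# Sketch only: key paths are assumptions and may differ between chart versions.
ingressController:
  image:
    repository: kong/kubernetes-ingress-controller
    tag: "3.4"            # moving tag tracking the minor release
    pullPolicy: Always    # re-pull on every pod start so a re-pointed tag is picked up
```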
Time is of the essence in these scenarios.
EDIT: The Kong advisory went out a few minutes before I hit the comment button. Thank you for the transparency!
A GitHub security advisory was just issued advising the community to upgrade to 3.4.1.
We completely understand the desire for transparency. Please know we are taking this incident extremely seriously. In addition to the advisory noted above, we've reached out to known affected customers and engaged an outside security research firm to analyze the affected image. At this time we have identified a cryptominer as noted in the advisory, attempting to connect to pool.supportxmr.com, but have not identified any other malicious payload. Should that change we will update this thread and the advisory.

Just as importantly, we're working through a full root cause analysis to determine how the attack occurred, and to help ensure that it cannot recur. We've made good progress on this front, and once our investigation has concluded we intend to publish our findings.
@mrwanny from the same container, there was also a DNS call to
With the assistance of a third party we have completed our review of the unauthorized KIC 3.4.0 image, and have confirmed that the XMRig miner was the sole unauthorized malicious code, and that there is no evidence of any other malicious code.
We just posted a blog post with additional details. |
Is there an existing issue for this?
Current Behavior
Recently I upgraded the "ingress" Helm chart from version v0.16.0 to v0.17.0. This included an upgrade of kong-ingress-controller from 3.3 to 3.4.
After the upgrade, one of the two kong-ingress-controller replicas started to consume 2 full CPU cores.
When I scaled down to 1 replica, the remaining replica had low CPU usage.
There was nothing interesting or repeated in the logs.
Since this is a home-lab setup with almost no traffic, the load was not caused by user traffic.
Expected Behavior
Both pods should have low CPU usage.
Steps To Reproduce
On a kind cluster I had bad performance even with one pod.
Kong Ingress Controller version
Kubernetes version
Anything else?
No response