Memory continuously increasing after 6 workspaces #180
Comments
Thanks for your report! What kind of Terraform providers are involved in the workspace operation? Is it terraform-provider-aws only? Could you please share the version?
At a high level, without actually reproducing it, hashicorp/terraform-provider-aws#31722 looks related.
We have seen this problem with terraform-provider-aws >= 4.67.0; we had to pin the provider to < 4.67.0 to prevent the memory issues.
Unfortunately no. Kubernetes controllers can only work as single processes; there is no way to share a single Kubernetes resource type across multiple controller instances. Both instances would be receiving and processing events, and there is no way to ensure that a single resource is only handled by a single controller. Have you identified what is using the memory? The provider cache will usually solve most memory usage issues, except for the known problem with the latest AWS provider using excessive amounts of memory.
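For reference, the mechanism underneath is Terraform's own plugin cache, which is enabled through the plugin_cache_dir setting in the CLI configuration (or the TF_PLUGIN_CACHE_DIR environment variable). A minimal sketch, assuming a stock Terraform CLI configuration rather than this provider's built-in cache wiring; the path shown is illustrative:

```hcl
# CLI configuration file (e.g. ~/.terraformrc) -- enable a shared plugin cache
# so every workspace reuses already-downloaded provider binaries instead of
# keeping its own copy on disk.
plugin_cache_dir = "/tf/plugin-cache"
```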
I would exec into the pod and run "du -s /tf/*" to see what is using all of the memory (which equates to disk usage). In this case the output showed that the bulk of the usage is in the provider cache, which I would expect, and that there are 6 versions of the AWS provider cached, which is using all of the space. If we pinned the AWS provider to a specific version it would use a lot less space.
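Pinning is done in the module's required_providers block. A minimal sketch, assuming the module source is under your control; the exact version number is illustrative (anything below 4.67.0, per the comment above):

```hcl
# versions.tf -- pin terraform-provider-aws to one exact version so only a
# single copy of the provider binary ends up in the plugin cache.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Exact pin below 4.67.0, where the excessive memory usage was reported;
      # adjust to whichever version you actually need.
      version = "= 4.66.1"
    }
  }
}
```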
What happened?
We have noticed a constant increase in the pod memory of the Terraform provider.
The pod memory has reached nearly 1.38 GiB.
How can we reproduce it?
We are running the Terraform provider to create EKS clusters across accounts, so we currently have 12 workspaces.
What environment did it happen in?