Multiple worker groups [KIM/feature] #46
Comments
This issue or PR has been automatically marked as stale due to the lack of recent activity. This bot triages issues and PRs according to the following rules:
You can:
If you think that I work incorrectly, kindly raise an issue with the problem. /lifecycle stale
This issue or PR has been automatically closed due to the lack of activity. This bot triages issues and PRs according to the following rules:
You can:
If you think that I work incorrectly, kindly raise an issue with the problem. /close
@kyma-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@pbochynski: QQ - is this feature still relevant? If yes, I will start the alignment with the KEB guys, as it also needs their involvement.
The issue is part of a bigger Epic: kyma-project/kyma#18195
JFYI: It's important to set
to ensure the pool nodes get a label which indicates the related worker pool. This is important for later scheduling rules (via affinity configurations etc.). It also worked without
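For illustration only (the exact setting discussed above was not captured in this thread): a minimal sketch of a Shoot worker pool whose nodes carry a pool-identifying label, plus a pod fragment that uses that label in a node affinity rule. The label key, pool name, and machine type are assumed, not taken from the project.

```yaml
# Sketch: extra worker pool whose nodes carry a pool-identifying label
# (label key worker.kyma-project.io/pool is illustrative, not a confirmed convention)
spec:
  provider:
    workers:
      - name: gpu-worker
        machine:
          type: g4dn.xlarge        # assumed machine type
        minimum: 1
        maximum: 3
        labels:
          worker.kyma-project.io/pool: gpu-worker
---
# Sketch: pod spec fragment pinning a workload to that pool via node affinity
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: worker.kyma-project.io/pool
              operator: In
              values: ["gpu-worker"]
```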
To cover billing requirements, we have to extend the contract of the
Today we aligned the next steps for rolling out multiple worker pools, and it makes sense to distinguish in the
It's meaningful to reflect this differentiation also in the
Proposal:
Requires #396
From the KIM side, we would expect the following structure from KEB:
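The concrete structure referenced above was not captured in this thread. As a hedged sketch only, assuming the workers/additionalWorkers split described in the AC below (API version, provider type, and all values are illustrative):

```yaml
# Sketch of a Runtime CR fragment as KIM might expect it from KEB (illustrative only)
apiVersion: infrastructuremanager.kyma-project.io/v1
kind: Runtime
spec:
  shoot:
    provider:
      type: aws
      workers:                     # Kyma-managed worker pool
        - name: cpu-worker-1
          machine:
            type: m6i.large
          minimum: 3
          maximum: 20
          zones: ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
      additionalWorkers:           # customer-defined worker pool(s)
        - name: customer-pool-1
          machine:
            type: m6i.xlarge
          minimum: 1
          maximum: 5
          zones: ["eu-central-1a"]
```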
Proposed the following logic for Provider Config create/update with multiple workers:
On shoot create:
On shoot update:
Final agreement with @kyma-project/gopher regarding worker pool sizing (meeting minutes from 2025-01-23):
The current implementation does not allow modifying worker-pool zones (these decide whether a pool runs in HA or non-HA mode). This change will be introduced after the first testing and stabilisation cycle on KCP DEV is completed.
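As context for the zones decision, a hedged sketch (zone names and pool names illustrative) of how the zones list on a worker pool determines HA vs. non-HA: a pool spread over several zones runs in HA mode, a single-zone pool does not, and per the agreement above this list cannot be modified after creation for now.

```yaml
# Sketch: worker-pool zones decide HA vs. non-HA (values illustrative)
workers:
  - name: cpu-worker-1
    zones: ["eu-central-1a", "eu-central-1b", "eu-central-1c"]   # multi-zone => HA
  - name: customer-pool-1
    zones: ["eu-central-1a"]                                      # single zone => non-HA
```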
Description
Enable the possibility to create multiple worker groups with different machine types, volume types, node labels, annotations, and taints.
See Gardener specs:
Current example shoot from Provisioner:
AC:
- `workers` in `RuntimeCR` represents the Kyma worker pool. It is always positioned as the FIRST element in the `workers` array in the Shoot spec (see the sketch after this list).
- `additionalWorkers` in `RuntimeCR` represents the customer worker pool(s). It is always appended to the `workers` field in the Shoot spec.
- … `cpu-worker-1`.
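A minimal sketch of the resulting ordering in the Shoot spec, assuming one customer pool (all names except cpu-worker-1 and all machine types are illustrative):

```yaml
# Sketch: worker ordering in the generated Shoot spec (illustrative values)
spec:
  provider:
    workers:
      - name: cpu-worker-1         # Kyma worker pool, always first (from RuntimeCR workers)
        machine:
          type: m6i.large
      - name: customer-pool-1      # customer pool(s), appended (from RuntimeCR additionalWorkers)
        machine:
          type: m6i.xlarge
```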
Reasons
One size doesn't fit all. Many applications require specific nodes for particular services.
Relates to