Replies: 1 comment
-
Hello! I'm working to extend the SpecCluster class to support multiple worker types, and I was hoping to hear your thoughts on how best to do this. My current approach introduces a concept of 'worker types': I treat the worker_spec passed in through the worker parameter as the spec for a 'default' worker, and I have the cluster store a dict of alternate_worker_args (mapping worker_type -> worker_spec). I also modify the scale() and adapt() functionality and add a scale_worker() method.
Here's what I was thinking so far:
Do you think this would work well for users, or might it create new pain points? Any ideas for how this could be better designed?
-
To avoid modifying existing … (or some other naming convention, like …). But this is only a suggestion; I'm not sure what's better. In the past, whenever I had workflows that could use multiple types of workers, I ended up splitting the code into separate parts. Maybe these functions will only be needed in very special cases, and then there is some benefit in keeping them as separate functions.
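For concreteness, here is a minimal, dask-independent sketch of the bookkeeping this design implies. Every name here (including `MultiTypeSpec` and the tuple layout of `workers`) is hypothetical; a real implementation would subclass `distributed.SpecCluster` and reuse its existing worker_spec machinery rather than reimplement it.

```python
class MultiTypeSpec:
    """Hypothetical sketch: track per-type worker specs, mirroring the idea
    of a 'default' spec plus an alternate_worker_args mapping."""

    def __init__(self, default_spec, alternate_worker_args=None):
        # The spec passed via the usual `worker` parameter becomes 'default'.
        self.specs = {"default": default_spec}
        # Alternate types: worker_type -> worker_spec.
        self.specs.update(alternate_worker_args or {})
        self.workers = {}   # worker name -> (worker_type, spec)
        self._counter = 0   # monotonically increasing suffix for names

    def scale_worker(self, n, worker_type="default"):
        """Scale only the workers of `worker_type` to n instances."""
        if worker_type not in self.specs:
            raise KeyError(f"unknown worker type: {worker_type}")
        current = [name for name, (t, _) in self.workers.items()
                   if t == worker_type]
        if n > len(current):
            # Add workers of this type, each with a unique name.
            for _ in range(n - len(current)):
                name = f"{worker_type}-{self._counter}"
                self._counter += 1
                self.workers[name] = (worker_type, self.specs[worker_type])
        else:
            # Remove the oldest surplus workers of this type.
            for name in current[: len(current) - n]:
                del self.workers[name]

    def scale(self, n):
        # Plain scale() keeps its old meaning: it only touches default workers.
        self.scale_worker(n, "default")
```

With this shape, `scale(2)` followed by `scale_worker(1, "gpu")` would leave two default workers and one GPU worker, and existing callers of `scale()` are unaffected because it routes to the default type.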