It seems that (with a new enough version of PyTorch at least) Cheetah works with MPS now (related to #61). Should we (a) close #61 and (b) change the checklist item on the PR template that asks for the tests to be run on CUDA so that it also allows MPS?
This would basically mean that every Mac user is running the tests on a GPU device by default. I think the main reason to run the tests on a GPU is to make sure that moving tensors between devices and device matching work correctly.
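As a rough sketch of what such a device-moving check could look like (not Cheetah's actual test suite; the helper `available_accelerator` is hypothetical), a test could pick whichever accelerator is present, CUDA or MPS, and verify that computation stays on it:

```python
# Minimal sketch, assuming pytest and PyTorch >= 1.12 (for the MPS backend).
import pytest
import torch


def available_accelerator():
    """Return a GPU-like device if one is available (CUDA or Apple MPS)."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return None


@pytest.mark.skipif(
    available_accelerator() is None, reason="No GPU-like device available"
)
def test_device_moving_and_matching():
    device = available_accelerator()

    # Move a tensor to the accelerator and make sure the result of a
    # computation ends up on the same device.
    x = torch.linspace(0.0, 1.0, 100).to(device)
    y = x * 2.0 + 1.0

    assert y.device.type == device.type
```

With a check like this, the same test exercises CUDA on Linux runners and MPS on Apple silicon, and is skipped on CPU-only machines.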
Sure. We should add a CI/CD job on an M1 device and close #61.
Another question though: should this also run for all Python versions and on both pushes and PRs? At some point the CI/CD pipelines just become excessive.
I think a "GPU" run on the newest version of Python is enough, at least until we run into version-specific issues. I would also like to remove Python 3.9 at some point (see #205), which would reduce the number of jobs. That's currently blocked by the DOOCS bindings only being available for 3.9 though.