Currently, CI and E2E tests use the nightly builds of PyTorch and PyTorch/XLA: https://github.com/AI-Hypercomputer/torchprime/blob/main/.github/workflows/cpu_test.yml, https://github.com/AI-Hypercomputer/torchprime/blob/main/torchprime/launcher/Dockerfile#L4. This means checks may stop working at any point if an upstream regression is introduced.

To prevent this kind of breakage, the standard practice is to use a roller:
- We pin these docker images at a specific date.
- A bot (the roller) makes a pull request to update the pin every day (see the sketch after this list).
- The pull request is automatically merged if all the checks pass.
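
For concreteness, here is a minimal sketch of what the roller could look like, assuming the Dockerfile pins the nightly images with a date-stamped tag and that a scheduled CI job runs this script. The tag format, file path, and branch naming below are hypothetical, not torchprime's actual conventions:

```python
"""Hypothetical roller script: bump the pinned nightly date and open a PR.

Assumes the Dockerfile pins images with a date-stamped tag such as
`nightly_20250101` (the tag format and file path are illustrative only).
"""
import datetime
import re
import subprocess
from pathlib import Path

DOCKERFILE = Path("torchprime/launcher/Dockerfile")
PIN_PATTERN = re.compile(r"nightly_(\d{8})")  # hypothetical tag format


def bump_pin() -> str:
    """Rewrite the date pin in the Dockerfile to today's nightly."""
    today = datetime.date.today().strftime("%Y%m%d")
    text = DOCKERFILE.read_text()
    new_text, count = PIN_PATTERN.subn(f"nightly_{today}", text)
    if count == 0:
        raise RuntimeError("no nightly pin found in Dockerfile")
    DOCKERFILE.write_text(new_text)
    return today


def open_pull_request(date: str) -> None:
    """Push a branch with the new pin and open a PR via the GitHub CLI."""
    branch = f"roll-nightly-{date}"
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    subprocess.run(["git", "commit", "-am", f"Roll nightly pin to {date}"], check=True)
    subprocess.run(["git", "push", "origin", branch], check=True)
    subprocess.run(
        [
            "gh", "pr", "create",
            "--title", f"Roll PyTorch/XLA nightly pin to {date}",
            "--body", "Automated roll. Merges automatically if all checks pass.",
        ],
        check=True,
    )


if __name__ == "__main__":
    open_pull_request(bump_pin())
```

Auto-merging itself can be left to branch protection plus GitHub's auto-merge, so the roller only has to open the PR; the overnight checks described below decide whether it lands.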
Because this roller only runs nightly, we can afford to run more extensive regression tests, such as measuring full-pod Llama 3.1 405B performance, or even running on multiple pods. The idea is that these tests run overnight, and by the next morning we can see whether the roller PR introduced any regressions. If a new nightly build introduces a regression, that PR can't be merged, and an engineer will inspect the profiles to track down the source of the regression in torch_xla.
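
As an illustration of such a gate, the check on the roller PR could be as simple as comparing measured step times against checked-in baselines with a small tolerance. The file names, metric format, and 3% threshold below are assumptions for the sketch, not torchprime's actual test harness:

```python
"""Hypothetical regression gate: fail the roller PR if step time regresses.

The baseline/results file layout and the 3% tolerance are illustrative only.
"""
import json
import sys
from pathlib import Path

TOLERANCE = 0.03  # fail if step time regresses by more than 3%


def check(baseline_path: str, results_path: str) -> int:
    baseline = json.loads(Path(baseline_path).read_text())
    results = json.loads(Path(results_path).read_text())
    failures = []
    for benchmark, old_step_time in baseline.items():
        new_step_time = results[benchmark]
        if new_step_time > old_step_time * (1 + TOLERANCE):
            failures.append(
                f"{benchmark}: {old_step_time:.3f}s -> {new_step_time:.3f}s"
            )
    for line in failures:
        print("regression:", line)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(check("baselines/llama-3.1-405b.json", "results/llama-3.1-405b.json"))
```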