
Poc elastic training #1310

Draft: wants to merge 19 commits into main from poc-elastic-training
Conversation

lukebaumann (Collaborator)

Description

Start with a short description of what the PR does and how this is a change from
the past.

The rest of the description includes relevant details and context, examples:

  • why is this change being made,
  • the problem being solved and any relevant context,
  • why this is a good solution,
  • some information about the specific implementation,
  • shortcomings of the solution and possible future improvements.

If the change fixes a bug or a GitHub issue, please include a link, e.g.:
FIXES: b/123456
FIXES: #123456

Tests

Please describe how you tested this change, and include any instructions and/or
commands to reproduce.

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed.

@gagika (Collaborator) left a comment


A few initial comments

MaxText/train.py Outdated

Could you move it to a separate elastic_train.py file similar to sft_trainer.py: https://github.com/AI-Hypercomputer/maxtext/blob/sft-maxtext/MaxText/sft_trainer.py

MaxText/train.py Outdated
@elasticutils.timeit
def reshard_fn(config: pyconfig.HyperParameters):
  """Reshard function."""
  while True:

Could you add a maximum number of reshards instead of an infinite loop?
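One way to address this comment is to bound the loop with a retry count. The sketch below is a hypothetical illustration, not the PR's actual implementation: `do_reshard` stands in for the real reshard logic, and the parameter name `max_reshard_retry_count` is borrowed from the commit messages but its real signature is an assumption.

```python
import logging
import time

def reshard_with_retries(do_reshard, max_reshard_retry_count=3, backoff_seconds=0.0):
    """Attempt resharding at most max_reshard_retry_count times.

    Raises RuntimeError if every attempt fails, so a reshard/failure
    loop terminates instead of spinning forever.
    """
    last_error = None
    for attempt in range(1, max_reshard_retry_count + 1):
        try:
            return do_reshard()
        except Exception as e:  # sketch only; real code would catch narrower errors
            last_error = e
            logging.warning("Reshard attempt %d/%d failed: %s",
                            attempt, max_reshard_retry_count, e)
            if backoff_seconds:
                time.sleep(backoff_seconds)
    raise RuntimeError(
        f"Resharding failed after {max_reshard_retry_count} attempts") from last_error
```

The key difference from `while True` is that a persistent failure surfaces as an exception after a fixed number of attempts rather than hanging the job.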

@lukebaumann lukebaumann force-pushed the poc-elastic-training branch from 0c975ce to 28044f7 Compare March 1, 2025 04:01
  • Adding ElasticUtils to config
  • Added elasticutils and gkeutils
  • Added a watchdog/timebomb to each step
  • Completely working test run. Further test runs and optimizations to follow
  • Fixed a bug if DATA_LOSS occurs during a save
  • Added timeit to reshard_fn
  • Updated the watchdog to repeatedly stack trace at every timeout interval
  • Send a fatal log if failures > max failures in slice_down() instead of in the training loop, in order to fail correctly if there is a reshard/failure loop within the reshard handler
  • Updated elasticutils and added a fake elasticutils
  • Working host memory offloading
  • Added a max reshard retry count
  • Updated elasticutils to how it will be structured in pathwaysutils
  • Delete checkpoint if we are rewinding behind when it was saved. Fixed a bug in put_array. Added max_reshard_failure_count. Added save nbytes log
  • Adding elastic trainer
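The per-step watchdog mentioned in the commits above ("repeatedly stack trace at every timeout interval") could be sketched as follows. This is an assumption about the design, not the elasticutils code itself; the class name `StepWatchdog` and its API are hypothetical.

```python
import sys
import threading
import traceback

class StepWatchdog:
    """Fire periodically while a training step runs; if the step hangs,
    dump stack traces of all threads at every timeout interval."""

    def __init__(self, timeout, on_fire=None):
        self._timeout = timeout
        self._on_fire = on_fire or self._dump_stacks
        self._stop = threading.Event()
        self._thread = None
        self.fire_count = 0

    def _dump_stacks(self):
        # Print the current stack of every live thread for debugging.
        for _, frame in sys._current_frames().items():
            traceback.print_stack(frame)

    def _run(self):
        # wait() returns False on timeout, so the watchdog fires
        # repeatedly until the step completes and sets the event.
        while not self._stop.wait(self._timeout):
            self.fire_count += 1
            self._on_fire()

    def __enter__(self):
        self._stop.clear()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
```

Usage would wrap each step, e.g. `with StepWatchdog(timeout=300): train_step(...)`, so a wedged step produces periodic stack traces instead of failing silently.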
@lukebaumann lukebaumann force-pushed the poc-elastic-training branch from 28044f7 to 9fdab26 Compare March 1, 2025 06:22

3 participants