
FlexFlow Train

Warning

The FlexFlow repository has been split into separate flexflow-train and flexflow-serve repositories. You are currently viewing flexflow-train. For anything inference/serving-related, go to flexflow-serve.

FlexFlow Train is a deep learning framework that accelerates distributed DNN training by automatically searching for efficient parallelization strategies.

Contributing

Please let us know if you encounter any bugs or have any suggestions by submitting an issue.

For instructions on how to contribute code to FlexFlow Train, see CONTRIBUTING.md.

We welcome all contributions to FlexFlow Train, from bug fixes to new features and extensions.

Citations

The Team

FlexFlow Train is developed and maintained by teams at CMU, Facebook, Los Alamos National Lab, MIT, Stanford, and UCSD (listed alphabetically).

License

FlexFlow Train is licensed under the Apache License 2.0.