Sample project for the article "Build Reliable Machine Learning Pipelines with Continuous Integration".
CI/CD (Continuous Integration/Continuous Deployment) is an essential practice for any software development project, including machine learning projects. It offers several benefits, such as:
✅ Catching errors early: CI/CD facilitates the early identification of errors by automatically testing any code changes made, enabling timely problem detection during the development phase.
✅ Better code quality: CI/CD promotes better code quality by ensuring that changes are thoroughly tested before they are merged into the main branch, making it easier to maintain the codebase over time.
✅ Faster time-to-market: CI/CD automates the build, testing, and deployment process, reducing the time it takes to release new models to production.
- Data scientists create and push a new model to remote storage.
- Data scientists create a pull request for the changes.
- CI pipeline tests code and model.
- Changes are merged if all tests pass.
- Merged changes trigger CD pipeline for model deployment.
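The "tests code and model" step typically includes a quality gate on the trained model, so that a pull request fails when model quality regresses. A minimal sketch of such a test (the model, synthetic data, and accuracy threshold are illustrative assumptions, not this repo's actual test suite):

```python
# Hypothetical quality-gate test of the kind a CI pipeline might run.
# The SVC model, synthetic data, and 0.6 threshold are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def test_model_beats_threshold():
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = SVC(kernel="rbf").fit(X_train, y_train)
    # If this assertion fails, the test fails and the pull request is blocked.
    assert model.score(X_test, y_test) > 0.6
```

Running such tests with pytest in the CI workflow is what turns "all tests pass" into a meaningful gate before merging.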
- DVC: Version data and experiments
- CML: Post a comment to the pull request showing the metrics and parameters of an experiment
- MLEM: Deploy ML models
- `src`: consists of Python scripts
- `data`: consists of data
- `tests`: consists of test files
- `model`: consists of ML models
- `dvclive`: consists of metrics of DVC experiments
- `.dvc/config`: consists of locations of the remote storage
- `params.yaml`: consists of parameters for Python scripts
- `dvc.yaml`: consists of data processes in the DVC pipeline
- `.github/workflows`: consists of GitHub workflows
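As a sketch of how `dvc.yaml` ties these pieces together (the stage names, commands, and paths below are illustrative assumptions, not this repo's exact pipeline):

```yaml
# Hypothetical DVC pipeline definition — stage names and paths are assumptions.
stages:
  process:
    cmd: python src/process_data.py
    deps:
      - data/raw
    params:
      - process
    outs:
      - data/processed
  train:
    cmd: python src/train.py
    deps:
      - data/processed
    params:
      - train
    outs:
      - model/model.pkl
```

Each stage declares its command, dependencies, parameters, and outputs, which lets `dvc exp run` rerun only the stages whose inputs changed.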
To try out this project, start by creating a new repository from the template.
Clone the project to your local machine:
git clone https://github.com/your-username/cicd-mlops-demo
Set up the environment:
# Go to the project directory
cd cicd-mlops-demo
# Create a new branch
git checkout -b experiment
# Install dependencies
pip install -r requirements.txt
# Pull data from the remote storage location called read
dvc pull -r read
Make changes to any files in the `src` or `tests` directories, or to `params.yaml`. To demonstrate, we will make a minor change to `params.yaml`.
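For example (hypothetical values — this assumes `params.yaml` exposes the SVM kernel, as the commit message later in this guide suggests), switching the kernel might look like:

```yaml
# Hypothetical edit to params.yaml — key names and values are assumptions.
train:
  kernel: poly  # was: rbf
```

Any such change to a tracked parameter is enough to produce a new experiment in the next step.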
Create an experiment:
dvc exp run
After running the experiment, we need to store the changes to our data and model remotely. One option is to use an S3 bucket as remote storage.
Follow these steps to push your data and model to an S3 bucket:
1. Create an S3 bucket.
2. Ensure your S3 credentials are stored locally.
3. Add the URI of your S3 bucket to the `.dvc/config` file.
4. Push changes to the remote storage location called `read-write` using:
dvc push -r read-write
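After step 3, the `.dvc/config` file might look roughly like this (both URLs below are placeholders — keep the existing `read` remote as-is and substitute your own bucket URI for `read-write`):

```
['remote "read"']
    url = https://example.com/existing-read-only-storage
['remote "read-write"']
    url = s3://your-bucket/dvc-storage
```

DVC stores each remote as a named section, which is why `dvc pull` and `dvc push` can target `read` and `read-write` separately with the `-r` flag.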
Add, commit, and push changes to the repository:
git add .
git commit -m 'change svm kernel'
git push origin experiment
Encrypted secrets let you store sensitive information in your repository. We will use encrypted secrets to make AWS credentials and a GitHub token accessible to GitHub Actions.
AWS credentials are necessary to pull the data and model from your S3 bucket. Follow this tutorial to add `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` secrets to your repository.
A GitHub token is necessary to write metrics and parameters as a comment on your pull request. To use a GitHub token as an encrypted secret, follow these steps:
1. Create a personal access token.
2. Create a secret named `TOKEN_GITHUB`.
3. In the "Value" field, paste the token you created in step 1.
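Inside the CI workflow, these secrets are exposed to individual steps through the `secrets` context. A rough sketch (the step names and report filename are assumptions, not this repo's exact workflow):

```yaml
# Hypothetical excerpt of a GitHub Actions job — step details are assumptions.
steps:
  - name: Pull data and model
    run: dvc pull -r read
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  - name: Comment metrics on the pull request
    run: cml comment create report.md
    env:
      REPO_TOKEN: ${{ secrets.TOKEN_GITHUB }}
```

`cml comment create` is CML's command for posting a report to the pull request; it reads the token from the `REPO_TOKEN` environment variable.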
Next, create a pull request.
The PR will trigger the CI pipeline to run tests. Once all tests pass, a comment will appear in the PR with the metrics and parameters of the new experiment.
Once the changes are merged, a CD pipeline will be triggered to deploy the ML model. Click the link under the "Deploy model" step to interact with the model.
Click "Try it out" to try out the model on a sample dataset.