Step 1: Open three Issues to track the status of the three groups in Exercise B.
Title: B/CEO
Assignees: the CEO group
Description:
- [ ] open all issues for exercise B
- [ ] exercise A is correct
- [ ] Readme
- [ ] pylint
Title: B/Programmer
Assignees: the Programmer group
Description:
- [ ] Update `SkiJump`
- [ ] Extend unit tests
- [ ] generate new datasets
Title: B/Engineer
Assignees: the Engineer group
Description:
- [ ] Update fit
- [ ] Update plot
- [ ] Extended plot is available
- [ ] CLI is available
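The issues above can also be opened from the command line with the GitHub CLI (a sketch; it assumes `gh` is installed and authenticated — opening them through the web UI works just as well, and since `--assignee` takes user names rather than groups, each group member can assign themselves):

```shell
# Open the CEO issue; repeat analogously for B/Programmer and B/Engineer.
gh issue create \
  --title "B/CEO" \
  --assignee "@me" \
  --body "- [ ] open all issues for exercise B
- [ ] exercise A is correct
- [ ] Readme
- [ ] pylint"
```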
Step 2: Double-check Exercise A
Double-check the work of the Programmer and the Engineer from Exercise A, i.e.:
- there are several theories in `config/`
- there are several datasets in `data/`
- each theory reproduces its matching dataset (i.e. the data generation of the Programmer is correct)
- a fit to each dataset yields back the underlying theory (i.e. the fitting routine of the Engineer is correct)
- the plots of the fits look reasonable
- there are unit tests for all features in `src/generate.py` and all unit tests pass, i.e. `$ pytest tests/` yields all green (if you have the proof in the CI, even better!)
- Open a new issue at the template repository and upload a few datasets there without telling the underlying theory
- Check for other available datasets there, download them, and then determine the model parameters. Track your results in an Issue in your repository.
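The round-trip check above ("a fit to each dataset yields back the underlying theory") can be sketched like this, using a simple quadratic as a stand-in for the real model in `src/generate.py` — all names and parameters here are purely illustrative:

```python
import numpy as np

# Hypothetical "theory": y = 2*x^2 - x + 0.5 (the real model, e.g. SkiJump, differs)
true_params = np.array([2.0, -1.0, 0.5])

# Generate a noisy dataset from the theory, as the Programmer's code would
rng = np.random.default_rng(seed=1)
x = np.linspace(0.0, 10.0, 50)
y = np.polyval(true_params, x) + rng.normal(scale=0.01, size=x.size)

# Fit the dataset, as the Engineer's routine would, and compare:
# the fitted parameters should reproduce the underlying theory
fitted = np.polyfit(x, y, deg=2)
assert np.allclose(fitted, true_params, atol=0.1)
```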
Step 3: Update the main Readme of the repository to something more meaningful
This can, e.g., include
- what the repository is about,
- implemented features,
- badges are very popular, e.g. for Workflows,
- ...
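A workflow badge, for instance, can be embedded near the top of the Readme; for a GitHub Actions workflow the pattern is (replace `<user>`, `<repo>`, and the workflow file name with your own):

```markdown
![CI](https://github.com/<user>/<repo>/actions/workflows/ci.yml/badge.svg)
```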
Another useful class of tools while developing software is linters. They perform a static code analysis and can thereby find trivial bugs, e.g. invalid variable names or incorrect scopes, and help avoid duplication. The most popular such tool in Python is pylint.
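For example, a trivial typo like the one below is caught by a linter before the code is ever run (an illustrative snippet; the message ID refers to pylint):

```python
def average(values):
    """Return the arithmetic mean of `values`."""
    total = 0
    for value in values:
        total += value
    count = len(values)
    # Typo: `result` is undefined, we meant `total`.
    # pylint flags this statically as E0602 (undefined-variable).
    return result / count
```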
Step 4: Add pylint to the tool stack
This includes:
- installing it (remember the `requirements.txt` file!)
- running it and fixing possible errors
- adding it to the CI
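These steps can be sketched as an extra GitHub Actions job (assuming your CI already lives in `.github/workflows/`; the file and job names are placeholders):

```yaml
# .github/workflows/lint.yml (hypothetical file name)
name: Lint
on: [push, pull_request]

jobs:
  pylint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # pylint should also be listed in requirements.txt
      - run: pip install -r requirements.txt pylint
      - run: pylint src/
```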