ADD: callgraph + clean-up #9

Merged 2 commits on Mar 20, 2024
5 changes: 4 additions & 1 deletion .gitignore
@@ -149,5 +149,8 @@ dmypy.json
# Sequence and metadata
**/experiments/data/**

# personal scratchpad
# Any callgraphs
**.svg

# My personal scratchpad
x_*
9 changes: 9 additions & 0 deletions README.md
@@ -25,6 +25,15 @@ To evaluate the best trial (trial < experiment) of all launched experiments, run
python q2_ritme/eval_best_trial_overall.py --model_path "experiments/models"
````

## Call graphs
To create a call graph for all functions in the package, run the following commands:
````
pip install pyan3==1.1.1

pyan3 q2_ritme/**/*.py --uses --no-defines --colored --grouped --annotated --svg --exclude 'q2_ritme/evaluate_all_experiments.py' --exclude 'q2_ritme/eval_best_trial_overall.py' --exclude 'q2_ritme._version' > call_graph.svg
````
(Note: other tools for creating call graphs were also tested, such as code2flow and snakeviz. Although these are properly maintained, they did not directly output call graphs the way pyan3 does.)
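
Below is a minimal sketch of driving the same pyan3 invocation from Python instead of relying on the shell's `**` globbing (which needs globstar support). It assumes pyan3 is installed and on PATH; the flags mirror the command above, and the `--exclude` options are approximated by filtering the file list:

````
# Illustrative sketch only: collect the package's Python files and run pyan3 on them.
import subprocess
from pathlib import Path

# Files left out of the call graph, mirroring the --exclude options above.
EXCLUDED = {"evaluate_all_experiments.py", "eval_best_trial_overall.py", "_version.py"}

sources = [str(p) for p in Path("q2_ritme").rglob("*.py") if p.name not in EXCLUDED]

cmd = ["pyan3", *sources, "--uses", "--no-defines", "--colored", "--grouped", "--annotated", "--svg"]

# pyan3 writes the SVG to stdout; capture it and save it to call_graph.svg.
svg = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
Path("call_graph.svg").write_text(svg)
````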

## Background
### Why ray tune?
"By using tuning libraries such as Ray Tune we can try out combinations of hyperparameters. Using sophisticated search strategies, these parameters can be selected so that they are likely to lead to good results (avoiding an expensive exhaustive search). Also, trials that do not perform well can be preemptively stopped to reduce waste of computing resources. Lastly, Ray Tune also takes care of training these runs in parallel, greatly increasing search speed." [source](https://docs.ray.io/en/latest/tune/examples/tune-xgboost.html#tune-xgboost-ref)
2 changes: 1 addition & 1 deletion q2_ritme/run_config.json
@@ -12,7 +12,7 @@
"xgb",
"nn"
],
"num_trials": 1,
"num_trials": 2,
"path_to_ft": null,
"path_to_md": null,
"seed_data": 12,
4 changes: 2 additions & 2 deletions q2_ritme/run_n_eval_tune.py
@@ -29,7 +29,7 @@ def parse_args():
return parser.parse_args()


def main(config_path):
def run_n_eval_tune(config_path):
with open(config_path, "r") as f:
config = json.load(f)

@@ -116,4 +116,4 @@ def main(config_path):

if __name__ == "__main__":
args = parse_args()
main(args.config)
run_n_eval_tune(args.config)