From 076e707eada4e7efb85fc5563fd45a8f04066b84 Mon Sep 17 00:00:00 2001 From: ANANDHU S <71482562+anandhu-eng@users.noreply.github.com> Date: Mon, 6 Jan 2025 19:01:28 +0530 Subject: [PATCH] test-commit --- docs/submission/index.md | 105 ++++++++++++++++++++++++++++++--------- 1 file changed, 81 insertions(+), 24 deletions(-) diff --git a/docs/submission/index.md b/docs/submission/index.md index d5a86f10a..cefe99245 100644 --- a/docs/submission/index.md +++ b/docs/submission/index.md @@ -3,12 +3,85 @@ hide: - toc --- +```mermaid
flowchart LR
    classDef hidden fill:none,stroke:none;

    subgraph Generation1 [Submission Generation 1]
    direction TB
    A1[ ] --> B1[ ]
    B1 --> C1[ ]
    C1 --> D1{ }
    D1 -- yes --> E1[ ]
    D1 -- no --> F1[ ]
    E1 --> F1
    end

    subgraph Generation2 [Submission Generation 2]
    direction TB
    A2[ ] --> B2[ ]
    B2 --> C2[ ]
    C2 --> D2{ }
    D2 -- yes --> E2[ ]
    D2 -- no --> F2[ ]
    E2 --> F2
    end

    subgraph Generation3 [Submission Generation 3]
    direction TB
    A3[populate system details] --> B3[generate submission structure]
    B3 --> C3[truncate-accuracy-logs]
    C3 --> D3{Infer low latency results and/or filter out invalid results}
    D3 -- yes --> E3[preprocess-mlperf-inference-submission]
    D3 -- no --> F3[run-mlperf-inference-submission-checker]
    E3 --> F3
    end

    subgraph subelement1
    Generation1 --> H1[Submission TAR file]
    H1 --> SubOutput1((Upload to GitHub))
    end

    subgraph subelement2
    Generation2 --> H2[Submission TAR file]
    H2 --> SubOutput2((Upload to GitHub))
    end

    subgraph subelement3
    Generation3 --> H3[Submission TAR file]
    H3 --> SubOutput3((Upload to GitHub))
    end

    Input((MLPerf Inference Results folder)) --> subelement3
    Input --> subelement2
    Input --> subelement1

    subgraph LargeCircle [ ]
    direction TB
    subelement1
    subelement2
    subelement3
    end

    LargeCircle --> G[Organise results from multiple SUTs]
    G --> I[Upload results to submission server]
    I --> Output((Receive validation email)) 
+``` + + Click [here](https://youtu.be/eI1Hoecc3ho) to view the recording of the workshop: Streamlining your MLPerf Inference results using CM. Click [here](https://docs.google.com/presentation/d/1cmbpZUpVr78EIrhzyMBnnWnjJrD-mZ2vmSb-yETkTA8/edit?usp=sharing) to view the proposal slide for Common Automation for MLPerf Inference Submission Generation through CM. -=== "Custom automation based MLPerf results" +=== "CM based results" + If you have followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, all the valid results will get aggregated to the `cm cache` folder. The following command can be used to browse the structure of the inference results folder generated by CM. + ### Get results folder structure + ```bash + cm find cache --tags=get,mlperf,inference,results,dir | xargs tree + ``` +=== "Non CM based results" If you have not followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, please make sure that the result directory is structured in the following way. 
``` └── System description ID(SUT Name) @@ -16,7 +89,7 @@ Click [here](https://docs.google.com/presentation/d/1cmbpZUpVr78EIrhzyMBnnWnjJrD └── Benchmark └── Scenario ├── Performance - | └── run_1 run for all scenarios + | └── run_x/#1 run for all scenarios | ├── mlperf_log_summary.txt | └── mlperf_log_detail.txt ├── Accuracy @@ -29,13 +102,13 @@ Click [here](https://docs.google.com/presentation/d/1cmbpZUpVr78EIrhzyMBnnWnjJrD | | └── run_x/#1 run for all scenarios | | ├── mlperf_log_summary.txt | | └── mlperf_log_detail.txt - | ├── Accuracy # for TEST01 only - | | ├── baseline_accuracy.txt (if test fails in deterministic mode) - | | ├── compliance_accuracy.txt (if test fails in deterministic mode) + | ├── Accuracy + | | ├── baseline_accuracy.txt + | | ├── compliance_accuracy.txt | | ├── mlperf_log_accuracy.json | | └── accuracy.txt | ├── verify_performance.txt - | └── verify_accuracy.txt # for TEST01 only + | └── verify_accuracy.txt #for TEST01 only |── user.conf └── measurements.json ``` @@ -54,27 +127,13 @@ Click [here](https://docs.google.com/presentation/d/1cmbpZUpVr78EIrhzyMBnnWnjJrD ``` -=== "MLPerf Automation based results" - If you have followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, all the valid results will get aggregated to the `cm cache` folder. The following command could be used to browse the structure of inference results folder generated by CM. - ### Get results folder structure - ```bash - cm find cache --tags=get,mlperf,inference,results,dir | xargs tree - ``` - - Once all the results across all the models are ready you can use the following command to generate a valid submission tree compliant with the [MLPerf requirements](https://github.com/mlcommons/policies/blob/master/submission_rules.adoc#inference-1). 
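For non-CM results, the expected layout can be sanity-checked before generating the submission tree. Below is a minimal shell sketch; `check_layout` is a hypothetical helper written for illustration, not a CM command, and it only covers the Performance runs from the tree shown above (Accuracy, Compliance, `user.conf`, and `measurements.json` would need similar checks).

```shell
# Sketch: verify that every Scenario under a results tree carries the
# per-run MLPerf log files shown in the layout above.
# check_layout is a hypothetical helper, not part of the CM tooling.
check_layout() {
    dir="$1"
    status=0
    # SUT/Benchmark/Scenario/Performance/run_* per the tree above
    for run_dir in "$dir"/*/*/*/Performance/run_*; do
        [ -d "$run_dir" ] || continue
        for f in mlperf_log_summary.txt mlperf_log_detail.txt; do
            if [ ! -f "$run_dir/$f" ]; then
                echo "missing: $run_dir/$f"
                status=1
            fi
        done
    done
    return $status
}
```

Running `check_layout <results_dir>` prints any missing log files and returns nonzero; empty output suggests the Performance runs at least match the expected shape.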
## Generate actual submission tree -=== "Multi-SUT submission" - - === "Using Local Folder Sync" - === "Using a Github repo" - -=== "Single SUT submission" - ```mermaid flowchart LR + classDef hidden fill:none,stroke:none; subgraph Generation [Submission Generation] direction TB A[populate system details] --> B[generate submission structure] @@ -175,11 +234,9 @@ Run the following command after **replacing `--repo_url` with your GitHub repository URL** ```bash cm run script --tags=push,github,mlperf,inference,submission \ - --repo_url=https://github.com/mlcommons/mlperf_inference_submissions_v4.1 \ + --repo_url=https://github.com/GATEOverflow/mlperf_inference_submissions_v4.1 \ --commit_message="Results on <system> added by <user>" \ --quiet ``` At the end, you can download the GitHub repo and upload it to the [MLCommons Submission UI](https://submissions-ui.mlcommons.org/submission). - -Click [here](https://youtu.be/eI1Hoecc3ho) to view the recording of the workshop: Streamlining your MLPerf Inference results using CM.
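The "Submission TAR file" step in the diagrams above amounts to packing the generated submission tree into one archive before upload. A rough sketch, assuming a placeholder folder name `mlperf_submission` (substitute the folder produced by the submission generation step, or the cloned GitHub repo):

```shell
# Sketch: bundle a generated submission tree into a single tarball.
# "mlperf_submission" is a placeholder folder name, not a fixed path
# from the CM tooling.
mkdir -p mlperf_submission   # stands in for the generated submission tree
tar -czf mlperf_submission.tar.gz -C mlperf_submission .
```

The resulting `mlperf_submission.tar.gz` is what gets uploaded to the submission server.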