---
hide:
- toc
---

```mermaid
flowchart LR
classDef hidden fill:none,stroke:none;
subgraph Generation1 [Submission Generation 1]
direction TB
A1[ ] --> B1[ ]
B1 --> C1[ ]
C1 --> D1{ }
D1 -- yes --> E1[ ]
D1 -- no --> F1[ ]
E1 --> F1
end
subgraph Generation2 [Submission Generation 2]
direction TB
A2[ ] --> B2[ ]
B2 --> C2[ ]
C2 --> D2{ }
D2 -- yes --> E2[ ]
D2 -- no --> F2[ ]
E2 --> F2
end
subgraph Generation3 [Submission Generation 3]
direction TB
A3[populate system details] --> B3[generate submission structure]
B3 --> C3[truncate-accuracy-logs]
C3 --> D3{Infer low latency results and/or filter out invalid results}
D3 -- yes --> E3[preprocess-mlperf-inference-submission]
D3 -- no --> F3[run-mlperf-inference-submission-checker]
E3 --> F3
end
subgraph subelement1
Generation1 --> H1[Submission TAR file]
H1 --> SubOutput1((Upload to GitHub))
end
subgraph subelement2
Generation2 --> H2[Submission TAR file]
H2 --> SubOutput2((Upload to GitHub))
end
subgraph subelement3
Generation3 --> H3[Submission TAR file]
H3 --> SubOutput3((Upload to GitHub))
end
Input((MLPerf Inference Results folder)) --> subelement3
Input --> subelement2
Input --> subelement1
subgraph LargeCircle [ ]
direction TB
subelement1
subelement2
subelement3
end
LargeCircle --> G[Organise results from multiple SUTs]
G --> I[Upload results to submission server]
I --> Output((Receive validation email))
```


Click [here](https://youtu.be/eI1Hoecc3ho) to view the recording of the workshop: Streamlining your MLPerf Inference results using CM.

Click [here](https://docs.google.com/presentation/d/1cmbpZUpVr78EIrhzyMBnnWnjJrD-mZ2vmSb-yETkTA8/edit?usp=sharing) to view the proposal slides for Common Automation for MLPerf Inference Submission Generation through CM.

=== "Custom automation based MLPerf results"
=== "CM based results"
If you have followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, all the valid results will get aggregated to the `cm cache` folder. The following command could be used to browse the structure of inference results folder generated by CM.
### Get results folder structure
```bash
cm find cache --tags=get,mlperf,inference,results,dir | xargs tree
```
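If you only need the path itself (for example, to copy or archive the results), the same lookup can capture it in a shell variable. A minimal sketch, assuming a single matching cache entry:
```bash
# Capture the CM results directory path in a variable
# (sketch; assumes exactly one matching cache entry)
RESULTS_DIR=$(cm find cache --tags=get,mlperf,inference,results,dir | head -n 1)
ls "$RESULTS_DIR"
```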
=== "Non CM based results"
If you have not followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, please make sure that the results directory is structured as shown below.
```
└── System description ID(SUT Name)
    ├── system_meta.json
    └── Benchmark
        └── Scenario
            ├── Performance
            |   └── run_x # 1 run for all scenarios
            |       ├── mlperf_log_summary.txt
            |       └── mlperf_log_detail.txt
            ├── Accuracy
            |   ├── mlperf_log_accuracy.json
            |   └── accuracy.txt
            ├── TEST01 # compliance test directory
            |   ├── Performance
            |   |   └── run_x # 1 run for all scenarios
            |   |       ├── mlperf_log_summary.txt
            |   |       └── mlperf_log_detail.txt
            |   ├── Accuracy # for TEST01 only
            |   |   ├── baseline_accuracy.txt (if test fails in deterministic mode)
            |   |   ├── compliance_accuracy.txt (if test fails in deterministic mode)
            |   |   ├── mlperf_log_accuracy.json
            |   |   └── accuracy.txt
            |   ├── verify_performance.txt
            |   └── verify_accuracy.txt # for TEST01 only
            ├── user.conf
            └── measurements.json
```
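Before moving on, you can quickly sanity-check a manually assembled tree with standard shell tools. A minimal sketch, assuming your SUT folder is named `my_sut` (a hypothetical name):
```bash
# Sanity-check a hand-built results tree
# (sketch; "my_sut" is a hypothetical SUT folder name)
SUT=my_sut
test -f "$SUT/system_meta.json" && echo "system_meta.json found"
# List the performance logs the submission checker will look for
find "$SUT" -path "*/Performance/run_*/mlperf_log_summary.txt" -print
```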

=== "MLPerf Automation based results"
If you have followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, all the valid results will get aggregated to the `cm cache` folder. The following command could be used to browse the structure of inference results folder generated by CM.
### Get results folder structure
```bash
cm find cache --tags=get,mlperf,inference,results,dir | xargs tree
```


Once all the results across all the models are ready, you can use the following command to generate a valid submission tree compliant with the [MLPerf requirements](https://github.com/mlcommons/policies/blob/master/submission_rules.adoc#inference-1).

## Generate actual submission tree

=== "Multi-SUT submission"

=== "Using Local Folder Sync"
=== "Using a Github repo"

=== "Single SUT submission"

```mermaid
flowchart LR
classDef hidden fill:none,stroke:none;
subgraph Generation [Submission Generation]
direction TB
A[populate system details] --> B[generate submission structure]
B --> C[truncate-accuracy-logs]
C --> D{Infer low latency results and/or filter out invalid results}
D -- yes --> E[preprocess-mlperf-inference-submission]
D -- no --> F[run-mlperf-inference-submission-checker]
E --> F
end
```

Run the following command after **replacing `--repo_url` with your GitHub repository URL**:

```bash
cm run script --tags=push,github,mlperf,inference,submission \
--repo_url=https://github.com/GATEOverflow/mlperf_inference_submissions_v4.1 \
--commit_message="Results on <HW name> added by <Name>" \
--quiet
```

At the end, you can download the GitHub repo and upload it to the [MLCommons Submission UI](https://submissions-ui.mlcommons.org/submission).
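A minimal sketch of that last step, assuming the example repository above (replace with your own):
```bash
# Download the pushed submission repo and package it for the submission UI
# (sketch; the repository URL is an example)
git clone https://github.com/GATEOverflow/mlperf_inference_submissions_v4.1.git
cd mlperf_inference_submissions_v4.1
git archive --format=zip -o ../submission.zip HEAD
```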

