Commit: Update submission generation steps (WIP)

arjunsuresh committed Jan 6, 2025
1 parent 194aeda commit bab97ff
Showing 1 changed file with 24 additions and 15 deletions.

39 changes: 24 additions & 15 deletions docs/submission/index.md
@@ -5,25 +5,18 @@ hide:




Click [here](https://docs.google.com/presentation/d/1cmbpZUpVr78EIrhzyMBnnWnjJrD-mZ2vmSb-yETkTA8/edit?usp=sharing) to view the proposal slides for Common Automation for MLPerf Inference Submission Generation through CM.

=== "CM based results"
If you have followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, all the valid results will get aggregated to the `cm cache` folder. The following command could be used to browse the structure of inference results folder generated by CM.
### Get results folder structure
```bash
cm find cache --tags=get,mlperf,inference,results,dir | xargs tree
```
=== "Non CM based results"
=== "Custom automation based MLPerf results"
If you have not followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, please make sure that the results directory is structured as follows.
```
└── System description ID (SUT Name)
├── system_meta.json
└── Benchmark
└── Scenario
├── Performance
| └── run_1 # 1 run for all scenarios
| ├── mlperf_log_summary.txt
| └── mlperf_log_detail.txt
├── Accuracy
@@ -36,13 +29,13 @@
| | └── run_x # 1 run for all scenarios
| | ├── mlperf_log_summary.txt
| | └── mlperf_log_detail.txt
| ├── Accuracy # for TEST01 only
| | ├── baseline_accuracy.txt (if test fails in deterministic mode)
| | ├── compliance_accuracy.txt (if test fails in deterministic mode)
| | ├── mlperf_log_accuracy.json
| | └── accuracy.txt
| ├── verify_performance.txt
| └── verify_accuracy.txt # for TEST01 only
├── user.conf
└── measurements.json
```
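For a quick start, here is a minimal bash sketch that scaffolds this layout for a single SUT, benchmark, and scenario (all names are hypothetical placeholders; the compliance subtree is omitted for brevity):

```bash
#!/bin/bash
# Sketch: scaffold the expected results layout for one SUT/benchmark/scenario.
# "my_sut", "resnet50", and "Offline" are hypothetical placeholder names.
SUT="my_sut"
BENCHMARK="resnet50"
SCENARIO="Offline"

mkdir -p "${SUT}/${BENCHMARK}/${SCENARIO}/Performance/run_1"
mkdir -p "${SUT}/${BENCHMARK}/${SCENARIO}/Accuracy"

# Per-SUT system description and per-scenario measurement files
touch "${SUT}/system_meta.json"
touch "${SUT}/${BENCHMARK}/${SCENARIO}/user.conf"
touch "${SUT}/${BENCHMARK}/${SCENARIO}/measurements.json"
```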
@@ -61,13 +54,27 @@
```
</details>

=== "MLPerf Automation based results"
If you have followed the `cm run` commands under the individual model pages in the [benchmarks](../index.md) directory, all the valid results are aggregated in the `cm cache` folder. The following command can be used to browse the structure of the inference results folder generated by CM.
### Get results folder structure
```bash
cm find cache --tags=get,mlperf,inference,results,dir | xargs tree
```
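Here `cm find cache` prints the path of the cached results directory, and `xargs` passes it to `tree` to render its contents. For a single valid result, the output might look like the following (hypothetical SUT and benchmark names, for illustration only):

```
mlperf_inference_results
└── my_sut
    └── resnet50
        └── Offline
            ├── Performance
            |   └── run_1
            |       ├── mlperf_log_summary.txt
            |       └── mlperf_log_detail.txt
            └── Accuracy
                ├── mlperf_log_accuracy.json
                └── accuracy.txt
```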


Once all the results across all the models are ready, you can use the following command to generate a valid submission tree compliant with the [MLPerf requirements](https://github.com/mlcommons/policies/blob/master/submission_rules.adoc#inference-1).
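A typical invocation looks like the sketch below (based on the CM documentation; flags such as `--submitter`, `--division`, and `--category` are assumptions and should be adjusted to your submission):

```bash
cm run script --tags=generate,inference,submission \
   --clean \
   --preprocess_submission=yes \
   --run-checker \
   --submitter=MLCommons \
   --division=open \
   --category=datacenter \
   --quiet
```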

## Generate actual submission tree

=== "Multi-SUT submission"

=== "Using Local Folder Sync"
=== "Using a Github repo"

=== "Single SUT submission"

```mermaid
flowchart LR
classDef hidden fill:none,stroke:none;
subgraph Generation [Submission Generation]
direction TB
A[populate system details] --> B[generate submission structure]
@@ -168,9 +175,11 @@ Run the following command after **replacing `--repo_url` with your GitHub repository URL**

```bash
cm run script --tags=push,github,mlperf,inference,submission \
--repo_url=https://github.com/mlcommons/mlperf_inference_submissions_v4.1 \
--commit_message="Results on <HW name> added by <Name>" \
--quiet
```
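If the push script is unavailable in your setup, a rough manual equivalent (assuming the submission tree has already been generated into a local clone of the repository) is:

```bash
# Rough manual equivalent of the CM push script (paths are illustrative).
cd mlperf_inference_submissions_v4.1
git add .
git commit -m "Results on <HW name> added by <Name>"
git push origin main
```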

Finally, you can download the GitHub repository and upload it to the [MLCommons Submission UI](https://submissions-ui.mlcommons.org/submission).

Click [here](https://youtu.be/eI1Hoecc3ho) to view the recording of the workshop: Streamlining your MLPerf Inference results using CM.
