[Post Mortem v2.1] Add README to submission directory #1247
Merged (1 commit), Oct 10, 2022
81 additions, 0 deletions: `tools/submission/README.md`

# Tools to check Submissions

## `filter-errors.py` (Deprecated)
### Summary
Removes manually verified ERROR entries from the log files of a v0.7 submission.

## `generate-final-report.py`
### Inputs
**input**: Path to the `.csv` output file of the [submission checker](#submission_checkerpy)
### Summary
Generates a spreadsheet in the format in which the final results will be published. This script can be used by running the following command:
```
python3 generate-final-report.py --input <path-to-csv>
```
### Outputs
Spreadsheet with the results.

## `log_parser.py`
### Summary
Helper module for the submission checker. It parses the logs containing the results of the benchmark.
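Loadgen detail logs consist of `:::MLLOG`-prefixed lines carrying a JSON payload. As a rough illustration of the parsing involved, here is a minimal sketch; `parse_loadgen_line` is a hypothetical helper and not the actual API of `log_parser.py`:

```python
import json

def parse_loadgen_line(line):
    """Parse one ':::MLLOG {...}' line from a loadgen detail log into a dict.

    Hypothetical helper; the real log_parser.py may expose a different API.
    Returns None for lines that are not MLLOG records.
    """
    prefix = ":::MLLOG"
    line = line.strip()
    if not line.startswith(prefix):
        return None
    return json.loads(line[len(prefix):])

entry = parse_loadgen_line(':::MLLOG {"key": "result_validity", "value": "VALID"}')
```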

## `pack_submission.sh` (Deprecated)
### Summary
Creates an encrypted tarball and generates its SHA1 checksum. Submissions currently do not need to be encrypted, hence the deprecation.
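As a rough sketch of what the deprecated script did (minus the encryption step), the following builds a gzipped tarball and computes its SHA1; `pack_and_hash` is a hypothetical helper, not part of the repository:

```python
import hashlib
import os
import tarfile

def pack_and_hash(src_dir, tar_path):
    """Create a gzipped tarball of src_dir and return its SHA1 hex digest.

    Unencrypted sketch: the deprecated script additionally encrypted
    the archive before hashing.
    """
    with tarfile.open(tar_path, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    sha1 = hashlib.sha1()
    with open(tar_path, "rb") as f:
        # Hash in 1 MiB chunks to avoid loading large archives into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha1.update(chunk)
    return sha1.hexdigest()
```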

## `repository_checks.sh`
### Inputs
Takes as input the path of the directory to run the checks on.
### Summary
Checks that a directory containing one or several submissions can be uploaded to GitHub. This script can be used by running the following command:
```
./repository_checks.sh <path-to-folder>
```
### Outputs
Logs to the console any errors that could prevent the submission from being uploaded to GitHub.
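One check a script like this typically performs is flagging files above GitHub's hard 100 MB per-file limit. A hypothetical Python equivalent of that single check (not the shell script's actual logic) might be:

```python
import os

# GitHub rejects pushes containing files over 100 MB (assumed limit here).
GITHUB_FILE_LIMIT = 100 * 1024 * 1024

def find_oversize_files(root, limit=GITHUB_FILE_LIMIT):
    """Return paths under root whose size exceeds limit, in walk order.

    Hypothetical re-implementation of one of the repository checks.
    """
    oversize = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                oversize.append(path)
    return oversize
```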

## `submission_checker.py`
### Inputs
**input**: Path to a directory containing one or several submissions. <br>
**version**: Checker version, e.g. v1.1, v2.0, v2.1. <br>
**submitter**: Run the checks only for the specified submitter. <br>
**csv**: Output path where the CSV with the results will be stored, e.g. `results/summary.csv`. <br>
**skip_compliance**: Flag to skip the compliance checks. <br>
**extra-model-benchmark-map**: Extra model-name-to-benchmark mapping, e.g. `retinanet:ssd-large;efficientnet:ssd-small`. <br>
**submission-exceptions**: Flag to ignore errors in submissions. <br>
**more-power-check**: Flag to run the check for power submissions. <br>

### Summary
Checks a directory that contains one or several submissions. This script can be used by running the following command:
```
python3 submission_checker.py --input <path-to-folder>
[--version <version>]
[--submitter <submitter-name>]
[--csv <path-to-output>]
[--skip_compliance]
[--extra-model-benchmark-map <extra-mapping-string>]
[--submission-exceptions]
[--more-power-check]
```

### Outputs
- A CSV file containing all the valid results in the directory.
- Errors and log messages for any results that are invalid.
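Once the checker has produced the summary CSV, it can be post-processed with the standard library. The sketch below counts valid results per submitter; the column name `"Submitter"` is an assumption about the CSV schema, so adjust it to match the actual header of your summary file:

```python
import csv
from collections import Counter

def count_results_per_submitter(csv_path, column="Submitter"):
    """Count rows per submitter in the checker's summary CSV.

    The column name "Submitter" is assumed, not taken from the
    checker's documented output format.
    """
    with open(csv_path, newline="") as f:
        return Counter(row[column] for row in csv.DictReader(f))
```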

## `truncate_accuracy_log.py`
### Inputs
**input**: Path to directory containing your submission <br>
**output**: Path to directory to output the submission with truncated files <br>
**submitter**: Organization name <br>
**backup**: Path to directory to store an unmodified copy of the truncated files <br>
### Summary
Takes a directory containing a submission and truncates its `mlperf_log_accuracy.json` files. There are two ways to use this script. First, we can create a new submission directory with the truncated files by running:
```
python truncate_accuracy_log.py --input <original_submission_directory> --submitter <organization_name> --output <new_submission_directory>
```
Second, we can truncate the files in place and store a copy of the unmodified files in the backup directory:
```
python tools/submission/truncate_accuracy_log.py --input <original_submission_directory> --submitter <organization_name> --backup <safe_directory>
```
### Outputs
A submission directory with truncated `mlperf_log_accuracy.json` files.
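A simplified sketch of the truncation step, under the assumption that it keeps the head and tail of each file and records a SHA1 of the original contents; the `truncate_accuracy_log` function below is hypothetical and handles a single file, whereas the real script walks a whole submission tree:

```python
import hashlib
import os
import shutil

def truncate_accuracy_log(path, backup_dir, keep_bytes=4096):
    """Back up one accuracy log, then truncate it in place.

    Simplified, hypothetical sketch: copies the file into backup_dir,
    keeps only the first and last keep_bytes bytes, and returns the
    SHA1 of the original contents so it can be recorded elsewhere.
    """
    os.makedirs(backup_dir, exist_ok=True)
    shutil.copy2(path, os.path.join(backup_dir, os.path.basename(path)))
    with open(path, "rb") as f:
        data = f.read()
    sha1 = hashlib.sha1(data).hexdigest()
    if len(data) > 2 * keep_bytes:
        with open(path, "wb") as f:
            f.write(data[:keep_bytes])
            f.write(data[-keep_bytes:])
    return sha1
```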