Update limits for SDXL benchmarks. #20138
Conversation
This reflects improvements in the compiler for these models.

Signed-off-by: MaheshRavishankar <mravisha@amd.com>
Looks good! We are migrating these benchmark tests over to iree-test-suites, so I will also update those values there.
Continuing the discussion from #20124 (comment), there should be a single source of truth for such metrics. If we move the source of truth to the test suite repository, then workflows running in IREE cannot report failures when those metrics are changed by the code in a pull request.

I've been suggesting we keep the source of truth in this repository for both compilation/run flags (including any extra files like tuning specs) and expected metrics and benchmark results. That means generalizing/parameterizing the test suite so repositories like this one can bring their own configurations without needing to modify the test suite itself, as is already done in the other test suites in that repository, like the ONNX op and model test suites.

We could also move the source of truth to a database (in neither repository), so long as we also update the test/benchmark behavior to be advisory ("these benchmark results changed, merging means you accept the new values") rather than blocking ("these benchmark results changed, so the workflow failed").
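A minimal sketch of the advisory (non-blocking) comparison described above, assuming a repo-local JSON file that maps benchmark names to expected limits. The file schema, the `check_benchmarks` helper, and the tolerance are all hypothetical illustrations, not the actual IREE benchmark tooling:

```python
import json


def check_benchmarks(expected_path, measured, tolerance=0.05):
    """Compare measured benchmark metrics against repo-local expected limits.

    Returns a list of advisory messages instead of raising, so a workflow
    can surface changed results ("merging means you accept the new values")
    without turning the run red.
    """
    with open(expected_path) as f:
        expected = json.load(f)
    advisories = []
    for name, limit in expected.items():
        value = measured.get(name)
        if value is None:
            advisories.append(f"{name}: no measurement recorded")
        elif value > limit * (1 + tolerance):
            advisories.append(f"{name}: regressed ({value} > limit {limit})")
        elif value < limit * (1 - tolerance):
            advisories.append(
                f"{name}: improved ({value} < limit {limit}); "
                "consider tightening the limit"
            )
    return advisories
```

Because the expected values live in this repository, a pull request that changes compiler behavior can update the limits in the same commit, keeping the signal green.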
(^ Yep: if the configs are not in this repo, then none of the workflows can block anything in the repo, and they are likely to sit in a perpetual red state. For them to be a useful signal to development, they must live in the repo so they can be updated along with any commit landing on main that changes the expectations.)
Could I get a stamp though, @ScottTodd or @geomin12?
As mentioned in [previous discussions](iree-org/iree#20138 (comment)), it would be ideal to have the data be customizable per user and sourced from within the IREE repo. As of [this commit in IREE](iree-org/iree@5ea6faa), the JSON configs and the MLIR tuner + E2E files now live in IREE. These pytest scripts now generate test cases and locate external files from a source specified by command-line arguments.

FYI: linting fixed up some files, so a few of the changes are unrelated to this PR, but clean files are always good.
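The loader described above, where the test suite stays generic and each consuming repo supplies its own config files, could look roughly like this. The JSON schema and the `load_benchmark_cases` helper are illustrative assumptions, not the actual iree-test-suites format:

```python
import json


def load_benchmark_cases(config_path):
    """Load benchmark test cases from an externally supplied JSON config.

    The schema here is hypothetical: each entry carries a model name,
    its compile flags, and the expected limits. The returned tuples are
    the kind of data that would feed pytest.mark.parametrize, so the
    test suite itself needs no per-repository modifications.
    """
    with open(config_path) as f:
        config = json.load(f)
    cases = []
    for entry in config["benchmarks"]:
        cases.append((entry["model"], entry["compile_flags"], entry["limits"]))
    return cases
```

The config path would come from a pytest command-line option, so a repository like IREE can point the suite at its own JSON files, tuning specs, and MLIR inputs without modifying the suite.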
This reflects improvements in the compiler for these models.

Signed-off-by: MaheshRavishankar <mravisha@amd.com>
Signed-off-by: geomin12 <geomin12@amd.com>