[onnx] Build real onnx frontend cli/API #18289
Labels: integrations/onnx (ONNX integration work), quality of life 😊 (Nice things are nice; let's have some). Added by ScottTodd on Aug 19, 2024.
A couple more items are needed:
stellaraccident added a commit to nod-ai/shark-ai that referenced this issue on Sep 6, 2024:

This test is not particularly inspired (and the API needs to be simplified), but it represents the first full system test in the repo. To run, the test downloads a mobilenet ONNX file from the model zoo, upgrades it, and compiles it. In the future I'd like to switch this to a simpler model like MNIST for basic functionality, but I had some issues getting that to work via ONNX import and punted. While a bit inefficient (it will fetch on each pytest run), this will keep things held together until we can do something more comprehensive. Note that my experience here prompted me to file iree-org/iree#18289, as this is way too much code and too many sharp edges to compile from ONNX (but it does work). Verifies numerics against a silly test image.

Includes some fixes:
* Reworked the system detect marker so that we only run system-specific tests (like amdgpu) on opt-in via a `--system amdgpu` pytest arg. This refinement was prompted by an ASAN violation in the HIP runtime code which was tripping me up when enabled by default. Filed here: iree-org/iree#18449
* Fixed a bug, revealed when writing the test, where an exception thrown from main could trigger a use-after-free: we were clearing workers at shutdown (vs. at destruction), when all objects owned at the system level need a lifetime no shorter than the system itself.
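For a sense of the glue this commit is describing, here is a minimal sketch of the upgrade/import/compile portion of that flow, assuming the `onnx` Python package is installed and the `iree-import-onnx` and `iree-compile` tools are on PATH. The file names, opset target, and backend are placeholders, not the actual test code:

```python
import subprocess

import onnx
from onnx import version_converter

# Placeholder paths; the real test fetches a mobilenet model from the ONNX model zoo.
src = "mobilenet.onnx"
upgraded = "mobilenet_upgraded.onnx"
mlir = "mobilenet.mlir"
vmfb = "mobilenet_cpu.vmfb"

# Upgrade to a newer opset so the importer accepts the model.
model = onnx.load(src)
onnx.save(version_converter.convert_version(model, 17), upgraded)

# Import ONNX to MLIR, then compile for a local CPU target.
subprocess.run(["iree-import-onnx", upgraded, "-o", mlir], check=True)
subprocess.run(
    ["iree-compile", mlir, "--iree-hal-target-backends=llvm-cpu", "-o", vmfb],
    check=True,
)
```

In practice the full flow also needs model download, input preparation, and runtime invocation, which is where the sharp edges accumulate.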
Some of my own observations from a day of coding:
ScottTodd added a commit to iree-org/iree-test-suites that referenced this issue on Sep 19, 2024:

Progress on #6. A sample test report HTML file is available here: https://scotttodd.github.io/iree-test-suites/onnx_models/report_2024_09_17.html

These new tests
* Download models from https://github.com/onnx/models
* Extract metadata from the models to determine which functions to call with random data
* Run the models through [ONNX Runtime](https://onnxruntime.ai/) as a reference implementation
* Import the models using `iree-import-onnx` (until we have a better API: iree-org/iree#18289)
* Compile the models using `iree-compile` (currently just for `llvm-cpu`, but this could be parameterized later)
* Run the models using `iree-run-module`, checking outputs using `--expected_output` and the reference data

Tests are written in Python using a set of pytest helper functions. As the tests run, they can log details about what commands they are running. When run locally, the `artifacts/` directory will contain all the relevant files. More can be done in follow-up PRs to improve the ergonomics there (like generating flagfiles).

Each test case can use XFAIL like `@pytest.mark.xfail(raises=IreeRunException)`. As we test across multiple backends or want to configure the test suite from another repo (e.g. [iree-org/iree](https://github.com/iree-org/iree)), we can explore more expressive marks.

Note that unlike the ONNX _operator_ tests, these tests use `onnxruntime` and `iree-import-onnx` at test time. The operator tests handle that as an infrequently run offline step. We could do something similar here, but the test inputs and outputs can be rather large for real models, and that gets into Git LFS or cloud storage territory.

If this test authoring model works well enough, we can do something similar for other ML frameworks like TFLite (#5).
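A rough sketch of the numerics check such a test performs, assuming `onnxruntime`, `numpy`, and the IREE command-line tools are available. The helper name, the `local-task` device choice, and the single-input/single-output assumption are illustrative placeholders, not the actual iree-test-suites helpers:

```python
import subprocess

import numpy as np
import onnxruntime as ort


def check_numerics(onnx_path: str, vmfb_path: str, tmp_dir: str) -> None:
    """Compare an IREE-compiled module against ONNX Runtime on random data.

    Assumes the model was already imported and compiled to `vmfb_path` (e.g. as
    in the earlier sketch) and has a single float32 input and a single output.
    """
    # Reference run: feed random data to the model through ONNX Runtime.
    session = ort.InferenceSession(onnx_path)
    inp = session.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # pin symbolic dims
    data = np.random.rand(*shape).astype(np.float32)
    reference = session.run(None, {inp.name: data})[0]

    input_npy = f"{tmp_dir}/input.npy"
    expected_npy = f"{tmp_dir}/expected.npy"
    np.save(input_npy, data)
    np.save(expected_npy, reference)

    # iree-run-module checks the module's output against --expected_output.
    # Depending on the exported entry point, a --function=<name> flag may be needed.
    subprocess.run(
        [
            "iree-run-module",
            f"--module={vmfb_path}",
            "--device=local-task",
            f"--input=@{input_npy}",
            f"--expected_output=@{expected_npy}",
        ],
        check=True,
    )
```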
Issue description:

The current ONNX compilation flow is serviceable but somewhat user-hostile, directly requiring too much low-level manipulation of the system. Recommend creating a real ONNX frontend API and CLI which handles all of the following properly. This would replace the existing front door here: https://iree.dev/guides/ml-frameworks/onnx/

For the record, when I took myself down this journey recently, I started with something simple and ended up with this: https://gist.github.com/stellaraccident/1b3366c129c3bc8e7293fb1353254407
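Purely as an illustration of the kind of front door the issue asks for (hypothetical: nothing like this exists in IREE today, and the function name and signature are invented), the ask is roughly to fold the steps sketched earlier into a single call or command:

```python
import subprocess

import onnx
from onnx import version_converter


def compile_onnx(onnx_path: str, vmfb_path: str, target: str = "llvm-cpu") -> None:
    """Hypothetical one-call ONNX frontend: upgrade opset, import, and compile.

    Sketch only: wraps today's `iree-import-onnx` and `iree-compile` tools;
    a real frontend would presumably do this in-process, without temp files.
    """
    upgraded = onnx_path + ".upgraded.onnx"
    mlir = onnx_path + ".mlir"
    onnx.save(version_converter.convert_version(onnx.load(onnx_path), 17), upgraded)
    subprocess.run(["iree-import-onnx", upgraded, "-o", mlir], check=True)
    subprocess.run(
        ["iree-compile", mlir, f"--iree-hal-target-backends={target}", "-o", vmfb_path],
        check=True,
    )
```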