Attempt to get a name for a Tensor without names #89
It looks like you are trying to access inputs/outputs by tensor name in some places, but the names are not set in your test models. In openvino_backend/src/openvino.cc (Lines 513 to 516 at commit 0a3fb4c) you work with tensor names, and get_any_name() throws an exception there.
If your model doesn't have named outputs, the following code can add the names:
@dtrawins - the model prep script for the qa models is here: triton-inference-server/server@71ca0c5. Can you recommend where the changes could be added there? Longer term, maybe the model generation for the openvino qa models could be moved to the backend (@nvda-mesharma for visibility, if that makes sense / is feasible).
@nnshah1 Yes, I have verified that the workaround provided by @dtrawins works. @dtrawins I noticed that another workaround is to revert the network version in
@nnshah1 There were some changes in the model format to align the API with models, like those from PyTorch, that allow outputs to be without a name. They can be accessed via an index. In order to use them in applications which require a name, it has to be set explicitly. That can be done via output.get_tensor().set_names(name). I wasn't able to confirm whether there is an alternative for setting it directly in openvino operations. Anyway, the workaround I sent earlier should be valid. Your WA of downgrading the version indeed works, but it is undocumented, so I wouldn't call it recommended; for some models it might fail.
@yinggeh, @dtrawins I think in this case we should add documentation that the backend currently requires models with named tensors (IIUC this is a limitation imposed by the backend and not by the openvino framework), and we should update our test model generation to the latest. I also don't recommend downgrading the network version; we should align the model build and runtime versions if we can. @dtrawins - it would be good to get your feedback on the test model scripts and perhaps find a way to define and maintain that set in the openvino backend itself (@nvda-mesharma - does that fit our general plans of making the backends more independent in build and test?)
Also, if this is resolved for this release, let's convert this into a known issue and feature request for the backend.
@nnshah1 I am working on updating OpenVINO model generation script for this release. |
I raised a PR triton-inference-server/server#7892 with output name assignment. Target for Triton release 25.01. |
Hi experts. Recently the Triton team upgraded OpenVINO from 23.3.0 to 24.4.0 for model generation. The OpenVINO Inference Engine Python API was deprecated in the 2024.0 release (source). Therefore, we updated our OpenVINO model generation scripts to the new API; see commit 71ca0c5.
After the change, Triton server failed to load all of the new OpenVINO models with the following error (showing one as an example). Please advise.
cc @bstrzele @dtrawins @ilya-lavrenov @mc-nv