Is your feature request related to a problem? Please describe.
The TRT engine cache gets regenerated whenever the model path changes. This is an issue when the model file override is used. There have been several similar feature requests:
triton-inference-server/server#4587
#126 (comment)
The problem is that ORT appears to use the model path internally as the cache key whenever a path is available:
https://github.com/microsoft/onnxruntime/blob/a433f22f17e59671ff01acf0d270b7e3476a952a/onnxruntime/core/framework/execution_provider.cc#L147-L148
If the path changes but the same model is used, the cache gets regenerated.
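To make the effect concrete, here is a minimal C++ sketch of that key-derivation behavior. The `CacheKey` function and the use of `std::hash` are hypothetical simplifications for illustration, not ORT's actual implementation:

```cpp
// Hypothetical simplification of the behavior described above (not ORT's
// actual code): when a model path is available, the path rather than the
// model bytes drives the cache key, so identical bytes loaded from a
// different path produce a different key and the TRT engine cache is rebuilt.
#include <functional>
#include <string>

std::size_t CacheKey(const std::string& model_path,
                     const std::string& model_bytes) {
  return model_path.empty() ? std::hash<std::string>{}(model_bytes)
                            : std::hash<std::string>{}(model_path);
}
```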
Describe the solution you'd like
There are two possible solutions to this issue:
1. Always use the binary stream as the key to find the TRT cache in ORT. This change would not require any changes in the ORT backend.
2. Add an option named "ONNXRUNTIME_LOAD_MODEL_FROM_PATH" to the ONNX Runtime backend. This would let the user opt in to either loading the model from its path or passing it as a binary stream (see the sketch after this list). A user who wants the TRT engine cache to be reused reliably would set this option to "off". Always loading models as a binary stream does not work, because it breaks models that require external weight files; with this option set to "off", such models still could not use the TRT cache.
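For context, a minimal C++ sketch of what the two modes mean at the ORT API level, assuming the TensorRT EP with engine caching enabled. The model path and cache directory are made up, and this illustrates the trade-off only; it is not the backend's actual code:

```cpp
// Sketch of the two session-creation modes the proposed option would toggle.
#include <onnxruntime_cxx_api.h>

#include <fstream>
#include <string>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
  Ort::SessionOptions opts;

  // Enable the TensorRT engine cache (example values).
  OrtTensorRTProviderOptions trt_opts{};
  trt_opts.trt_engine_cache_enable = 1;
  trt_opts.trt_engine_cache_path = "/tmp/trt_cache";
  opts.AppendExecutionProvider_TensorRT(trt_opts);

  const std::string model_path = "/models/example/1/model.onnx";  // made-up path

  // "on": load from path (Linux, where ORTCHAR_T is char). ORT knows the model
  // location, so external weight files resolve, but the cache key can depend
  // on the path, as described above.
  Ort::Session from_path(env, model_path.c_str(), opts);

  // "off": load from an in-memory buffer. No path is passed to ORT, so the key
  // cannot depend on it, but external weight files cannot be located.
  std::ifstream in(model_path, std::ios::binary);
  std::vector<char> bytes((std::istreambuf_iterator<char>(in)),
                          std::istreambuf_iterator<char>());
  Ort::Session from_bytes(env, bytes.data(), bytes.size(), opts);
  return 0;
}
```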
CC @GuanLuo @tanmayv25 @dzier