
TRT Engine Cache Regeneration Issue #145

Open

Description

@Tabrizian

Is your feature request related to a problem? Please describe.

The TRT cache gets regenerated whenever the model path changes. This is an issue when model file override is used. There have been many similar feature requests:

triton-inference-server/server#4587
#126 (comment)

The problem is that ORT internally appears to use the model path, when it exists, as the key to the cache:

https://github.com/microsoft/onnxruntime/blob/a433f22f17e59671ff01acf0d270b7e3476a952a/onnxruntime/core/framework/execution_provider.cc#L147-L148

If the path changes but the same model is used, the cache gets regenerated.
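
For context, the ORT C++ API already exposes both load modes, and only the path-based constructor records a model path for the execution provider to hash. A minimal sketch (with "model.onnx" as a placeholder path and TRT execution-provider registration omitted for brevity):

```cpp
#include <onnxruntime_cxx_api.h>

#include <fstream>
#include <iterator>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "trt-cache-demo");
  Ort::SessionOptions opts;  // TRT EP registration omitted for brevity

  // Path-based load: ORT records the model path, which (per the code
  // linked above) is hashed into the cache key, so moving or renaming
  // the file invalidates the TRT engine cache.
  Ort::Session from_path(env, "model.onnx", opts);

  // In-memory load: no model path is recorded, so the path-derived part
  // of the key cannot change between loads of the same bytes.
  std::ifstream f("model.onnx", std::ios::binary);
  std::vector<char> bytes((std::istreambuf_iterator<char>(f)),
                          std::istreambuf_iterator<char>());
  Ort::Session from_bytes(env, bytes.data(), bytes.size(), opts);
  return 0;
}
```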

Describe the solution you'd like

There could be two solutions to this issue:

  1. Always use the binary stream in ORT as the key to find the TRT cache. This change would not require any changes in the ORT backend.

  2. Add an option named "ONNXRUNTIME_LOAD_MODEL_FROM_PATH" to the ONNXRuntime backend. This would let the user opt in to either loading the model from its path (today's behavior) or passing the binary stream to ORT. Users who want the TRT engine cache to be reused properly would set this option to "off". Always loading models from binary is not viable, since it breaks models that require external weight files; for the same reason, even with the option set to "off", the TRT cache still could not be reused for models that require external weight files. A rough sketch of the resulting session-creation logic follows this list.
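
To make option 2 concrete, here is a minimal sketch of the branch the backend could take, in terms of the ORT C++ API (`Ort::Session` can be constructed either from a path or from an in-memory buffer; a Linux build is assumed, where ORTCHAR_T is char). The helper `ReadFileToBuffer` and the factory `CreateSession` are hypothetical names for illustration, and plumbing the flag through from the Triton model configuration is omitted:

```cpp
#include <onnxruntime_cxx_api.h>

#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical helper: read the whole model file into memory.
static std::vector<char> ReadFileToBuffer(const std::string& path) {
  std::ifstream f(path, std::ios::binary);
  return std::vector<char>((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
}

// Hypothetical factory guarded by the proposed option.
// load_from_path == true keeps today's behavior (needed for models with
// external weight files); false hands ORT the raw bytes, so the engine
// cache key no longer depends on the (possibly overridden) model path.
Ort::Session CreateSession(Ort::Env& env, const Ort::SessionOptions& opts,
                           const std::string& model_path,
                           bool load_from_path) {
  if (load_from_path) {
    return Ort::Session(env, model_path.c_str(), opts);
  }
  std::vector<char> bytes = ReadFileToBuffer(model_path);
  return Ort::Session(env, bytes.data(), bytes.size(), opts);
}
```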

CC @GuanLuo @tanmayv25 @dzier
