feat: improve engine caching and fix bugs #3932
base: main
Conversation
@cehongwang please take a pass so we have multiple eyes on this PR
Force-pushed from a54907e to ea81677
    logger.warning(
        "require_full_compilation arg is not applicable for torch.compile with backend='torch_tensorrt'"
    )
    if settings.strip_engine_weights:
When would a torch.compile user try to use strip weights?
Added the warning back. Not sure why the strip_engine_weights arg doesn't work for torch.compile().
    logger.info(f"The engine already exists in cache for hash: {hash_val}")
    return False

    if not settings.strip_engine_weights:
I feel like strip weights should only apply to the returned engine and not to the cache directly. So a returned cached engine with strip weights == True won't be refit, but you always save only the stripped engine.
The current design saves the stripped engine to use less hard disk space. A returned cached engine with strip weights == True won't be refit either; only strip weights == False will be refit.
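The policy discussed above (the cache always stores a weight-stripped engine; weights are refitted back in on retrieval only when strip_engine_weights is False) can be sketched in plain Python. This is a hypothetical stand-in, not the actual Torch-TensorRT engine cache; EngineCache, save, and load are illustrative names, and the "engine" is modeled as a dict from weight name to value:

```python
# Hypothetical sketch of the caching policy discussed above: engines are
# always persisted with weights stripped (to save disk space), and are
# refitted on load unless the caller asked for a stripped engine.
class EngineCache:
    def __init__(self):
        self._store = {}  # hash -> weight-stripped engine stand-in

    def save(self, hash_val, engine):
        if hash_val in self._store:
            return False  # engine already cached for this hash
        # Strip weights before persisting, regardless of user settings.
        self._store[hash_val] = {name: None for name in engine}
        return True

    def load(self, hash_val, weights, strip_engine_weights):
        stripped = self._store.get(hash_val)
        if stripped is None:
            return None
        if strip_engine_weights:
            # Caller wants a stripped engine: return it un-refitted.
            return dict(stripped)
        # Otherwise refit the cached engine with the current weights.
        return {name: weights[name] for name in stripped}


cache = EngineCache()
weights = {"conv1": [1.0, 2.0], "fc": [3.0]}
cache.save("abc123", weights)

refitted = cache.load("abc123", weights, strip_engine_weights=False)
stripped = cache.load("abc123", weights, strip_engine_weights=True)
```

The point of the sketch is the asymmetry: stripping always applies on save, while the strip_engine_weights setting only controls whether the loaded engine comes back refitted.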
Description
As I requested, TensorRT 10.14 added an argument trt.SerializationFlag.INCLUDE_REFIT to allow refitted engines to remain refittable. That means engines can be refitted multiple times. Based on this capability, this PR enhances the existing engine caching and refitting features as follows:
- ... compilation_settings.strip_engine_weights. Then, when users pull out the cached engine, it will be automatically refitted and kept refittable.
- ... refit_module_weights(). e.g.: ...
- ... _conversion.py.
Type of change
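The property the description relies on (serializing with INCLUDE_REFIT keeps an already-refitted engine refittable, so it can be refitted again later) can be illustrated with a small stand-in. MockEngine and its methods are hypothetical, not the TensorRT API:

```python
# Hypothetical stand-in for the INCLUDE_REFIT behavior: serializing with
# the flag preserves refittability, so an engine can be refitted
# repeatedly instead of only once.
class MockEngine:
    def __init__(self, weights, refittable=True):
        self.weights = dict(weights)
        self.refittable = refittable

    def refit(self, new_weights):
        if not self.refittable:
            raise RuntimeError("engine is no longer refittable")
        self.weights.update(new_weights)

    def serialize(self, include_refit):
        # Without INCLUDE_REFIT, the serialized engine loses refittability.
        return MockEngine(self.weights, refittable=include_refit)


engine = MockEngine({"w": 0})
engine.refit({"w": 1})
reloaded = engine.serialize(include_refit=True)
reloaded.refit({"w": 2})  # still refittable: a second refit succeeds
```

Without the flag, the round-tripped engine would reject the second refit, which is why the cache can only keep pulled engines refittable when serialization includes refit information.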
Checklist: