Checklist
Describe the bug
My project is deployed with mmdeploy-runtime, and the backend consists of TensorRT models converted with mmdeploy. The project uses multiple models and needs to switch between them on demand. Because of GPU memory constraints, I searched the documentation but could not find any way to manually release GPU memory, similar to `torch.cuda.empty_cache()` in PyTorch, or `paddle.clear_intermediate_tensor()` plus `self.predictor.try_shrink_memory()` in Paddle. Could the maintainers provide a similar method?
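As a workaround I am currently considering the pattern below: keep only one model resident at a time, and drop the Python reference to the old model handle (plus a `gc.collect()`) before loading the next one, so the runtime can destroy the underlying engine and reclaim its device memory. This is only a sketch of the switching pattern: `FakeModel` and `ModelManager` are hypothetical placeholders, not mmdeploy-runtime APIs; in a real project `FakeModel` would be e.g. `mmdeploy_runtime.Detector`, and I am assuming (not certain) that releasing the handle actually frees the TensorRT engine's memory, which is exactly what I would like the maintainers to confirm or provide an explicit API for.

```python
import gc

# Hypothetical stand-in for an mmdeploy_runtime model handle
# (e.g. mmdeploy_runtime.Detector); in the real runtime the GPU
# memory is held by the TensorRT engine inside this handle.
class FakeModel:
    alive = 0  # counts currently-resident "models"

    def __init__(self, model_path):
        self.model_path = model_path
        FakeModel.alive += 1

    def __del__(self):
        # Stands in for the runtime destroying the engine.
        FakeModel.alive -= 1


class ModelManager:
    """Keeps at most one model resident; releases the old handle
    before loading the next, so peak memory stays near one model."""

    def __init__(self, loader):
        self.loader = loader   # callable: path -> model handle
        self.model = None
        self.current = None

    def switch(self, model_path):
        if model_path == self.current:
            return self.model
        # Drop the old handle first so its memory can be reclaimed.
        self.model = None
        gc.collect()  # force destruction of the released handle
        self.model = self.loader(model_path)
        self.current = model_path
        return self.model


mgr = ModelManager(FakeModel)
mgr.switch("model_a")
mgr.switch("model_b")
print(FakeModel.alive)  # → 1 (only one model resident after switching)
```

Even with this pattern, without an explicit cache-release API there is no way to verify or force that the runtime's allocator actually returns memory to the device, which is why an official method would help.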
Reproduction
None
Environment
Error traceback