
[Bug] How can GPU memory be released manually in mmdeploy? #2818

Open
2 of 3 tasks

liuky74 opened this issue Aug 26, 2024 · 0 comments

Comments

liuky74 commented Aug 26, 2024

Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read the FAQ documentation but cannot get the expected help.
  • The bug has not been fixed in the latest version.

Describe the bug

My project is deployed with mmdeploy-runtime, and the backend consists of TensorRT models converted with mmdeploy. The project contains multiple models and needs to switch between them on demand. Because of GPU memory constraints, I searched the available material but could not find anything like torch.empty_cache() or paddle.clear_intermediate_tensor() + self.predictor.try_shrink_memory() for manually releasing GPU memory. Could the maintainers provide a similar method?
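For reference, a workaround often used with SDK-based pipelines is to drop every Python reference to the pipeline object and let its destructor release the TensorRT engine before the next model is created. Below is a minimal sketch assuming the Python mmdeploy_runtime.Detector API; the model directories are hypothetical and only illustrate the switching pattern, and this is not an official mmdeploy memory-release call.

```python
import gc

from mmdeploy_runtime import Detector

# Hypothetical SDK model directories, used only for illustration.
MODEL_A_DIR = "work_dirs/model_a_trt"
MODEL_B_DIR = "work_dirs/model_b_trt"


def load_detector(model_dir: str) -> Detector:
    # Create a detector backed by the converted TensorRT engine.
    return Detector(model_path=model_dir, device_name="cuda", device_id=0)


detector = load_detector(MODEL_A_DIR)
# ... run inference with model A ...

# Before switching models, drop all references to the old detector and
# force a garbage-collection pass so its C++ destructor runs, which
# should free the TensorRT engine and the GPU memory it holds.
del detector
gc.collect()

detector = load_detector(MODEL_B_DIR)
# ... run inference with model B ...
```

Whether the freed memory is actually returned to the driver depends on the SDK's internal allocator, so this only approximates the explicit empty_cache()-style call requested above.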

Reproduction

Environment

Error traceback
