
Add API paddle.device.cuda.empty_cache to release idle GPU memory held by the allocator #35427

Merged
7 commits merged into PaddlePaddle:develop on Sep 14, 2021

Conversation

@xiaolao (Member) commented Sep 3, 2021

PR types

New features

PR changes

APIs

Describe

Add API paddle.device.cuda.empty_cache to release idle cached memory held by the allocator, so that the memory can be used by other GPU applications and the release is visible in nvidia-smi. In most cases you do not need to call this function: Paddle does not return memory to the OS when you delete Tensors on the GPU, because it keeps GPU memory in a pool so that subsequent allocations can be served much faster.

Example:

    import paddle

    # required: gpu
    paddle.set_device("gpu")
    # allocate ~512 MB on the GPU; nvidia-smi shows the memory in use
    tensor = paddle.randn([512, 512, 512], "float32")
    del tensor
    # the blocks are still cached in Paddle's pool, so nvidia-smi still shows them
    paddle.device.cuda.empty_cache()
    # the idle cached memory has now been released back to the GPU
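
To observe the effect numerically rather than through nvidia-smi, here is a minimal sketch, assuming paddle.device.cuda.memory_reserved is available in your Paddle version (it may not exist in older releases), that compares the allocator's reserved memory before and after the call:

    import paddle

    paddle.set_device("gpu")

    tensor = paddle.randn([512, 512, 512], "float32")  # ~512 MB of float32
    del tensor

    # the tensor is deleted, but its blocks remain cached in the allocator's pool
    print("reserved before:", paddle.device.cuda.memory_reserved())

    paddle.device.cuda.empty_cache()

    # idle cached blocks have been returned to the GPU driver
    print("reserved after:", paddle.device.cuda.memory_reserved())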

@CLAassistant commented Sep 3, 2021

CLA assistant check: all committers have signed the CLA.

@paddle-bot-old bot commented Sep 3, 2021

✅ This PR's description meets the template requirements!
Please wait for other CI results.

@paddle-bot-old bot commented Sep 3, 2021

Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@xiaolao xiaolao closed this Sep 3, 2021
@xiaolao xiaolao reopened this Sep 3, 2021
XiaoguangHu01 previously approved these changes Sep 10, 2021

@XiaoguangHu01 (Contributor) left a comment:

LGTM

sneaxiy previously approved these changes Sep 13, 2021

@sneaxiy (Collaborator) left a comment:

LGTM.

TCChenlong previously approved these changes Sep 13, 2021
jzhang533 previously approved these changes Sep 13, 2021

@jzhang533 (Contributor) left a comment:

lgtm

@jzhang533 (Contributor) left a comment:

LGTM

@lanxianghit (Contributor) left a comment:

LGTM for API change

@tizhou86 tizhou86 merged commit 8393271 into PaddlePaddle:develop Sep 14, 2021
@xiaolao xiaolao deleted the empty_cache branch September 14, 2021 03:26
@xiaolao xiaolao restored the empty_cache branch September 14, 2021 07:27
AnnaTrainingG pushed a commit to AnnaTrainingG/Paddle that referenced this pull request Sep 29, 2021
Add api paddle.device.cuda.empty_cache to release idle gpu memory hold by allocator。 (PaddlePaddle#35427)

* Add empty_cache api to release idle gpu memory hold by allocator,test=develop

* Add empty_cache api to release idle gpu memory hold by allocator,test=develop

* Add empty_cache api to release idle gpu memory hold by allocator,test=develop

* Fix test coverage problem for empty_cache

* delete redundant check for empty_cache

* fix the problem of empty_cache's doc

* delete the nvidia-smi comment in doc of empty_cache, test=document_fix
@xiaolao xiaolao deleted the empty_cache branch April 23, 2022 12:43
9 participants