Add Chinese documentation for 4 GPU memory monitoring APIs (max_memory_allocated, max_memory_reserved, memory_allocated, memory_reserved) #4193

Merged
30 changes: 30 additions & 0 deletions docs/api/paddle/device/cuda/max_memory_allocated_cn.rst
.. _cn_api_device_cuda_max_memory_allocated_cn:


max_memory_allocated
-------------------------------

.. py:function:: paddle.device.cuda.max_memory_allocated(device=None)

Return the peak size of GPU memory allocated to tensors on the given device.

.. note::
    The size of the GPU memory block that Paddle allocates to a tensor is aligned to 256 bytes, so it may be larger than the memory the tensor actually needs. For example, a float32 tensor of shape [1] occupies 256 bytes of GPU memory, even though storing a single float32 value only requires 4 bytes.

Parameters
::::::::

**device** (paddle.CUDAPlace|int|str, optional) - The device, device ID, or device name in the form ``gpu:x``. If ``device`` is None, the current device is used. Default: None.


Returns
::::::::

An integer: the peak size, in bytes, of GPU memory allocated to tensors on the given device.

Code Example
::::::::

COPY-FROM: paddle.device.cuda.max_memory_allocated
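The 256-byte alignment described in the note can be sketched in plain Python (``aligned_size`` is an illustrative helper, not part of Paddle's API):

```python
def aligned_size(nbytes, alignment=256):
    """Round a request up to the allocator's alignment boundary."""
    return ((nbytes + alignment - 1) // alignment) * alignment

# a shape-[1] float32 tensor needs 4 bytes but occupies one 256-byte block
print(aligned_size(4))    # 256
# a shape-[100] float32 tensor needs 400 bytes -> two 256-byte units
print(aligned_size(400))  # 512
```

This rounding is why the value reported by ``max_memory_allocated`` can exceed the sum of the tensors' nominal sizes.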


27 changes: 27 additions & 0 deletions docs/api/paddle/device/cuda/max_memory_reserved_cn.rst
.. _cn_api_device_cuda_max_memory_reserved_cn:


max_memory_reserved
-------------------------------

.. py:function:: paddle.device.cuda.max_memory_reserved(device=None)

Return the peak size of GPU memory managed by the allocator on the given device.

Parameters
::::::::

**device** (paddle.CUDAPlace|int|str, optional) - The device, device ID, or device name in the form ``gpu:x``. If ``device`` is None, the current device is used. Default: None.


Returns
::::::::

An integer: the peak size, in bytes, of GPU memory managed by the allocator on the given device.

Code Example
::::::::

COPY-FROM: paddle.device.cuda.max_memory_reserved
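The gap between memory *allocated* to tensors and memory *managed* (reserved) by the allocator comes from caching: freed blocks are kept for reuse rather than returned to the driver. A toy model of that behavior (not Paddle's actual allocator, whose policy is more involved):

```python
class CachingAllocatorModel:
    """Toy model: freed blocks stay in the allocator's cache,
    so reserved memory does not shrink while the program runs."""

    def __init__(self):
        self.allocated = 0      # bytes currently held by tensors
        self.reserved = 0       # bytes held by the allocator itself
        self.peak_reserved = 0  # what max_memory_reserved would report

    def malloc(self, n):
        self.allocated += n
        self.reserved = max(self.reserved, self.allocated)
        self.peak_reserved = max(self.peak_reserved, self.reserved)

    def free(self, n):
        self.allocated -= n  # the block returns to the cache, not the driver

alloc = CachingAllocatorModel()
alloc.malloc(1 << 20)  # a 1 MiB tensor is created
alloc.free(1 << 20)    # the tensor is released; the MiB stays cached
print(alloc.allocated, alloc.reserved, alloc.peak_reserved)  # 0 1048576 1048576
```

In this model, as in practice, the reserved figure stays at its high-water mark even after all tensors are freed.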


30 changes: 30 additions & 0 deletions docs/api/paddle/device/cuda/memory_allocated_cn.rst
.. _cn_api_device_cuda_memory_allocated_cn:


memory_allocated
-------------------------------

.. py:function:: paddle.device.cuda.memory_allocated(device=None)

Return the size of GPU memory currently allocated to tensors on the given device.

.. note::
    The size of the GPU memory block that Paddle allocates to a tensor is aligned to 256 bytes, so it may be larger than the memory the tensor actually needs. For example, a float32 tensor of shape [1] occupies 256 bytes of GPU memory, even though storing a single float32 value only requires 4 bytes.

Parameters
::::::::

**device** (paddle.CUDAPlace|int|str, optional) - The device, device ID, or device name in the form ``gpu:x``. If ``device`` is None, the current device is used. Default: None.


Returns
::::::::

An integer: the size, in bytes, of GPU memory currently allocated to tensors on the given device.

Code Example
::::::::

COPY-FROM: paddle.device.cuda.memory_allocated
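``memory_allocated`` reports the current figure, while ``max_memory_allocated`` reports a running maximum. The relationship can be sketched with toy bookkeeping (the deltas are illustrative byte counts, not real Paddle measurements):

```python
# current mirrors memory_allocated; peak mirrors max_memory_allocated
current = 0
peak = 0
for delta in (+1024, +2048, -1024, +512):  # alloc, alloc, free, alloc
    current += delta
    peak = max(peak, current)
print(current, peak)  # 2560 3072
```

The peak never decreases, so after the 1024-byte free the two values diverge.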


27 changes: 27 additions & 0 deletions docs/api/paddle/device/cuda/memory_reserved_cn.rst
.. _cn_api_device_cuda_memory_reserved_cn:


memory_reserved
-------------------------------

.. py:function:: paddle.device.cuda.memory_reserved(device=None)

Return the size of GPU memory currently managed by the allocator on the given device.

Parameters
::::::::

**device** (paddle.CUDAPlace|int|str, optional) - The device, device ID, or device name in the form ``gpu:x``. If ``device`` is None, the current device is used. Default: None.


Returns
::::::::

An integer: the size, in bytes, of GPU memory currently managed by the allocator on the given device.

Code Example
::::::::

COPY-FROM: paddle.device.cuda.memory_reserved
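All four APIs return raw byte counts. When logging them, a small formatting helper keeps the numbers readable (``fmt_bytes`` is an illustrative utility, not part of Paddle):

```python
def fmt_bytes(n):
    """Format a byte count such as memory_reserved() returns."""
    for unit in ("B", "KiB", "MiB", "GiB"):
        if n < 1024:
            return f"{n:.2f} {unit}"
        n /= 1024
    return f"{n:.2f} TiB"

print(fmt_bytes(268435456))  # 256.00 MiB
print(fmt_bytes(512))        # 512.00 B
```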