Add Chinese documentation for 4 GPU memory monitoring APIs (max_memory_allocated, max_memory_reserved, memory_allocated, memory_reserved) (#4193) * Add CN docs for new GPU memory monitoring APIs * Fix typo
Showing 4 changed files with 114 additions and 0 deletions.
@@ -0,0 +1,30 @@
.. _cn_api_device_cuda_max_memory_allocated_cn:

max_memory_allocated
-------------------------------

.. py:function:: paddle.device.cuda.max_memory_allocated(device=None)

Returns the peak size of GPU memory allocated to Tensors on the given device.

.. note::
    The GPU memory blocks that Paddle allocates to Tensors are 256-byte aligned, so they may be larger than the memory the Tensor actually needs. For example, a float32 Tensor with shape [1] occupies 256 bytes of GPU memory, even though storing a single float32 value only requires 4 bytes.

Parameters
::::::::

**device** (paddle.CUDAPlace|int|str, optional) - The device, device ID, or device name in the form ``gpu:x``. If ``device`` is None, the current device is used. Default: None.

Returns
::::::::

An integer: the peak size of GPU memory allocated to Tensors on the given device, in bytes.

Code Example
::::::::

COPY-FROM: paddle.device.cuda.max_memory_allocated
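A minimal usage sketch (not the official example pulled in by COPY-FROM), assuming a CUDA-enabled Paddle build with at least one visible GPU; the device name ``gpu:0`` and the tensor shape are illustrative:

.. code-block:: python

    import paddle

    # Assumes a CUDA-enabled Paddle build and at least one visible GPU.
    paddle.device.set_device("gpu:0")

    # Allocating a Tensor drives the peak counter up.
    x = paddle.rand([256, 256], dtype="float32")

    peak_bytes = paddle.device.cuda.max_memory_allocated("gpu:0")
    print(f"peak GPU memory allocated to Tensors: {peak_bytes} bytes")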
@@ -0,0 +1,27 @@
.. _cn_api_device_cuda_max_memory_reserved_cn:

max_memory_reserved
-------------------------------

.. py:function:: paddle.device.cuda.max_memory_reserved(device=None)

Returns the peak size of GPU memory managed by the Allocator on the given device.

Parameters
::::::::

**device** (paddle.CUDAPlace|int|str, optional) - The device, device ID, or device name in the form ``gpu:x``. If ``device`` is None, the current device is used. Default: None.

Returns
::::::::

An integer: the peak size of GPU memory managed by the Allocator on the given device, in bytes.

Code Example
::::::::

COPY-FROM: paddle.device.cuda.max_memory_reserved
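A hedged sketch of how the peak reserved size might be queried, under the same assumptions (CUDA-enabled build, visible GPU, illustrative device name and shape):

.. code-block:: python

    import paddle

    # Assumes a CUDA-enabled Paddle build and at least one visible GPU.
    paddle.device.set_device("gpu:0")

    x = paddle.ones([1024, 1024], dtype="float32")

    # Peak size of GPU memory held by the Allocator, in bytes; this includes
    # cached blocks, so it is at least as large as max_memory_allocated.
    peak_reserved = paddle.device.cuda.max_memory_reserved("gpu:0")
    print(f"peak GPU memory reserved by the Allocator: {peak_reserved} bytes")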
@@ -0,0 +1,30 @@
.. _cn_api_device_cuda_memory_allocated_cn:

memory_allocated
-------------------------------

.. py:function:: paddle.device.cuda.memory_allocated(device=None)

Returns the size of GPU memory currently allocated to Tensors on the given device.

.. note::
    The GPU memory blocks that Paddle allocates to Tensors are 256-byte aligned, so they may be larger than the memory the Tensor actually needs. For example, a float32 Tensor with shape [1] occupies 256 bytes of GPU memory, even though storing a single float32 value only requires 4 bytes.

Parameters
::::::::

**device** (paddle.CUDAPlace|int|str, optional) - The device, device ID, or device name in the form ``gpu:x``. If ``device`` is None, the current device is used. Default: None.

Returns
::::::::

An integer: the size of GPU memory currently allocated to Tensors on the given device, in bytes.

Code Example
::::::::

COPY-FROM: paddle.device.cuda.memory_allocated
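As above, a minimal illustrative sketch rather than the official COPY-FROM example; device name and tensor shape are assumptions:

.. code-block:: python

    import paddle

    # Assumes a CUDA-enabled Paddle build and at least one visible GPU.
    paddle.device.set_device("gpu:0")

    x = paddle.zeros([512, 512], dtype="float32")

    # Size of GPU memory currently allocated to live Tensors, in bytes.
    allocated = paddle.device.cuda.memory_allocated("gpu:0")
    print(f"GPU memory currently allocated to Tensors: {allocated} bytes")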
@@ -0,0 +1,27 @@
.. _cn_api_device_cuda_memory_reserved_cn:

memory_reserved
-------------------------------

.. py:function:: paddle.device.cuda.memory_reserved(device=None)

Returns the size of GPU memory currently managed by the Allocator on the given device.

Parameters
::::::::

**device** (paddle.CUDAPlace|int|str, optional) - The device, device ID, or device name in the form ``gpu:x``. If ``device`` is None, the current device is used. Default: None.

Returns
::::::::

An integer: the size of GPU memory currently managed by the Allocator on the given device, in bytes.

Code Example
::::::::

COPY-FROM: paddle.device.cuda.memory_reserved
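A short sketch contrasting the reserved and allocated sizes, again assuming a CUDA-enabled build and using an illustrative device name and shape:

.. code-block:: python

    import paddle

    # Assumes a CUDA-enabled Paddle build and at least one visible GPU.
    paddle.device.set_device("gpu:0")

    x = paddle.rand([128, 128], dtype="float32")

    # The Allocator may cache freed blocks, so memory_reserved is always
    # greater than or equal to memory_allocated.
    reserved = paddle.device.cuda.memory_reserved("gpu:0")
    allocated = paddle.device.cuda.memory_allocated("gpu:0")
    print(f"reserved: {reserved} bytes, allocated: {allocated} bytes")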