@@ -34,6 +34,23 @@ class DeviceStatsMonitor(Callback):
3434 r"""Automatically monitors and logs device stats during training, validation and testing stage.
3535 ``DeviceStatsMonitor`` is a special callback as it requires a ``logger`` to passed as argument to the ``Trainer``.
3636
+    Logged Metrics:
+        Device statistics are logged with keys prefixed as
+        ``DeviceStatsMonitor.{hook_name}/{base_metric_name}`` (e.g.,
+        ``DeviceStatsMonitor.on_train_batch_start/cpu_percent``).
+        The source of these metrics depends on the ``cpu_stats`` flag
+        and the active accelerator.
+
+        CPU (via ``psutil``): Logs ``cpu_percent``, ``cpu_vm_percent``, and
+        ``cpu_swap_percent``, all as percentages (%).
+        CUDA GPU (via :func:`torch.cuda.memory_stats`): Logs detailed memory statistics
+        from PyTorch's CUDA memory allocator (e.g., ``allocated_bytes.all.current``,
+        ``num_ooms``), all in bytes. GPU compute utilization is not logged by default.
+        Other accelerators (e.g., TPU, MPS): Logs device-specific stats, for example:
+        - TPU: ``avg. free memory (MB)``
+        - MPS: ``mps.current_allocated_bytes``
+
+        Inspect the logged metrics or the accelerator's documentation for details.
     Args:
         cpu_stats: if ``None``, it will log CPU stats only if the accelerator is CPU.
             If ``True``, it will log CPU stats regardless of the accelerator.
@@ -45,6 +62,7 @@ class DeviceStatsMonitor(Callback):
         ModuleNotFoundError:
             If ``psutil`` is not installed and CPU stats are monitored.
 
+
     Example::
 
         from lightning import Trainer
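The key scheme described in the new docstring text can be sketched in plain Python. This is a minimal illustration only; ``device_stats_key`` is a hypothetical helper, not part of the Lightning API, and the hook and metric names are placeholder values.

```python
def device_stats_key(hook_name: str, base_metric_name: str) -> str:
    # Keys are prefixed with the callback class name and the hook that
    # produced the stats, e.g. "DeviceStatsMonitor.on_train_batch_start/...".
    # Hypothetical helper for illustration; not Lightning API.
    return f"DeviceStatsMonitor.{hook_name}/{base_metric_name}"


print(device_stats_key("on_train_batch_start", "cpu_percent"))
# DeviceStatsMonitor.on_train_batch_start/cpu_percent
```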