     - Provides capabilities to run training using the DeepSpeed library, with training optimizations for large, billion-parameter models. :doc:`Learn more. <../advanced/model_parallel/deepspeed>`
   * - hpu_parallel
     - ``HPUParallelStrategy``
     - Strategy for distributed training on multiple HPU devices. :doc:`Learn more. <../integrations/hpu/index>`
   * - hpu_single
     - ``SingleHPUStrategy``
     - Strategy for training on a single HPU device. :doc:`Learn more. <../integrations/hpu/index>`
   * - tpu_spawn
     - ``TPUSpawnStrategy``
     - Strategy for training on multiple TPU devices using the :func:`torch_xla.distributed.xla_multiprocessing.spawn` method. :doc:`Learn more. <../accelerators/tpu>`
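As a minimal sketch of how these registry names are used (assuming the standard ``pytorch_lightning.Trainer`` API; the accelerator and device counts below are illustrative), a strategy can be selected either by its registered name or by instantiating its class directly:

.. code-block:: python

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import DeepSpeedStrategy

    # Select a strategy by its registry name; the Trainer resolves the
    # string to the matching strategy class ("deepspeed" -> DeepSpeedStrategy).
    trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="deepspeed")

    # Instantiate the strategy class directly to pass non-default arguments,
    # e.g. a specific ZeRO optimization stage.
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=4,
        strategy=DeepSpeedStrategy(stage=3),
    )

The string form is convenient when the defaults suffice; constructing the strategy class is needed whenever it takes custom arguments.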