Full documentation for RCCL is available at https://rccl.readthedocs.io
- MSCCL++ integration for specific contexts
- Performance data collection for rccl_replayer
- Tuner Plugin example for MI300
- Tuning table for large numbers of nodes
- Support for amdclang++
- New Rome model
- Compatibility with NCCL 2.21.5
- Increased channel count for MI300X multi-node
- Enabled MSCCL for single-process multi-threaded contexts
- Enabled gfx12
- Enabled CPX mode for MI300X
- Enabled tracing with rocprof
- Improved version reporting
- Enabled GDRDMA for Linux kernel 6.4.0+
- Fixed model matching when PXN is enabled
- GDR support flag now set with DMABUF
- On systems running Linux kernel 6.8.0, such as Ubuntu 24.04, Direct Memory Access (DMA) transfers between the GPU and NIC are disabled, which impacts multi-node RCCL performance.
- This issue was reproduced with RCCL 2.20.5 (ROCm 6.2.0 and 6.2.1) on systems with Broadcom Thor-2 NICs and affects other systems with RoCE networks using Linux 6.8.0 or newer.
- Older RCCL versions are also impacted.
- This issue will be addressed in a future ROCm release.
- Compatibility with NCCL 2.20.5
- Compatibility with NCCL 2.19.4
- Performance tuning for some collective operations on MI300
- Enabled NVTX code in RCCL
- Replaced rccl_bfloat16 with hip_bfloat16
- NPKit updates:
- Warm-up iteration removal is no longer the default and now requires opt-in
- Doubled the size of buffers to accommodate more channels
- Modified rings to be rail-optimized topology friendly
- Replaced ROCmSoftwarePlatform links with ROCm links
- Support for fp8 and rccl_bfloat8
- Support for using HIP contiguous memory
- Implemented ROC-TX for host-side profiling
- Enabled static build
- Added new Rome model
- Added fp16 and fp8 cases to unit tests
- New unit test for main kernel stack size
- New -n option for topo_expl to override the number of nodes
- Improved debug messages of memory allocations
- Bug when configuring RCCL for only LL128 protocol
- Scratch memory allocation after API change for MSCCL
- Compatibility with NCCL 2.18.6
- Compatibility with NCCL 2.18.3
- Compatibility with NCCL 2.17.1-1
- Performance tuning for some collective operations
- Minor improvements to MSCCL codepath
- NCCL_NCHANNELS_PER_PEER support
- Improved compilation performance
- Support for gfx94x
- Potential race-condition during ncclSocketClose()
- Compatibility with NCCL 2.16.2
- Removed a workaround in favor of an indirect function call
- Compatibility with NCCL 2.15.5
- Unit test executable renamed to rccl-UnitTests
- HW-topology aware binary tree implementation
- Experimental support for MSCCL
- New unit tests for hipGraph support
- NPKit integration
- rocm-smi ID conversion
- Support for HIP_VISIBLE_DEVICES for unit tests
- Support for p2p transfers to non (HIP) visible devices
- Removed TransferBench from tools; it now lives in a standalone repo: https://github.com/ROCm/TransferBench
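The renamed unit test binary can be restricted to a subset of GPUs via HIP_VISIBLE_DEVICES, as noted above. A minimal sketch; the binary path is an assumption and will vary with your build tree:

```shell
# Restrict the unit tests to the first two GPUs.
export HIP_VISIBLE_DEVICES=0,1
# Path to the test binary is illustrative; adjust for your build tree.
# ./build/test/rccl-UnitTests
echo "HIP_VISIBLE_DEVICES=$HIP_VISIBLE_DEVICES"
```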
- Compatibility with NCCL 2.13.4
- Improvements to RCCL when running with hipGraphs
- RCCL_ENABLE_HIPGRAPH environment variable is no longer necessary to enable hipGraph support
- Minor latency improvements
- Resolved potential memory access error due to asynchronous memset
- Improvements to LL128 algorithms
- Adding initial hipGraph support via opt-in environment variable RCCL_ENABLE_HIPGRAPH
- Integrating with NPKit (https://github.com/microsoft/NPKit) profiling code
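The hipGraph support noted above is opt-in at this stage. A minimal sketch of enabling it before launch; the application binary name is hypothetical:

```shell
# Opt in to the initial hipGraph support.
export RCCL_ENABLE_HIPGRAPH=1
# ./my_rccl_app   # hypothetical application binary; run on a real system
echo "RCCL_ENABLE_HIPGRAPH=$RCCL_ENABLE_HIPGRAPH"
```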
- Compatibility with NCCL 2.12.10
- Packages for test and benchmark executables on all supported OSes using CPack.
- Adding custom signal handler - opt-in with RCCL_ENABLE_SIGNALHANDLER=1
- Additional details provided if Binary File Descriptor library (BFD) is pre-installed
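The custom signal handler above is likewise opt-in; a minimal sketch:

```shell
# Opt in to the custom signal handler. Reported details are richer
# when the Binary File Descriptor (BFD) library is pre-installed.
export RCCL_ENABLE_SIGNALHANDLER=1
echo "RCCL_ENABLE_SIGNALHANDLER=$RCCL_ENABLE_SIGNALHANDLER"
```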
- Adding support for reusing ports in NET/IB channels
- Opt-in with NCCL_IB_SOCK_CLIENT_PORT_REUSE=1 and NCCL_IB_SOCK_SERVER_PORT_REUSE=1
- When the "Call to bind failed : Address already in use" error occurs in large-scale AlltoAll (for example, on 64 or more MI200 nodes), opt in to one or both of these options to resolve the port exhaustion issue
- Avoid using NCCL_IB_SOCK_SERVER_PORT_REUSE when NCCL_NCHANNELS_PER_NET_PEER is tuned to a value greater than 1
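The port-reuse opt-ins above can be sketched as follows for a large-scale AlltoAll run:

```shell
# Opt in to port reuse for NET/IB channels on large-scale AlltoAll runs.
export NCCL_IB_SOCK_CLIENT_PORT_REUSE=1
export NCCL_IB_SOCK_SERVER_PORT_REUSE=1
# Note: avoid the server-side option when NCCL_NCHANNELS_PER_NET_PEER > 1.
echo "client=$NCCL_IB_SOCK_CLIENT_PORT_REUSE server=$NCCL_IB_SOCK_SERVER_PORT_REUSE"
```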
- Removed experimental clique-based kernels
- Unit testing framework rework
- Minor bug fixes
- Managed memory is not currently supported for clique-based kernels
- Compatibility with NCCL 2.11.4
- Managed memory is not currently supported for clique-based kernels
- Compatibility with NCCL 2.10.3
- Managed memory is not currently supported for clique-based kernels
- Packaging split into a runtime package called rccl and a development package called rccl-devel. The development package depends on the runtime package. To aid the transition, the runtime package suggests the development package on all supported OSes except CentOS 7. The suggests feature in packaging is introduced as a deprecated feature and will be removed in a future ROCm release.
- Compatibility with NCCL 2.9.9
- Managed memory is not currently supported for clique-based kernels
- Ability to select the number of channels to use for clique-based all reduce (RCCL_CLIQUE_ALLREDUCE_NCHANNELS). This can be adjusted to tune for performance when computation kernels are being executed in parallel.
- Additional tuning for clique-based kernel AllReduce performance (still requires opt in with RCCL_ENABLE_CLIQUE=1)
- Modification of default values for number of channels / byte limits for clique-based all reduce based on device architecture
- Replaced RCCL_FORCE_ENABLE_CLIQUE with RCCL_CLIQUE_IGNORE_TOPO
- Clique-based kernels can now be enabled on topologies where all active GPUs are XGMI-connected
- Topologies not normally supported by clique-based kernels require RCCL_CLIQUE_IGNORE_TOPO=1
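The clique-related variables above can be combined as follows; the channel count is illustrative and should be tuned per workload:

```shell
# Enable clique-based kernels (opt-in) and override the topology check
# for topologies not normally supported.
export RCCL_ENABLE_CLIQUE=1
export RCCL_CLIQUE_IGNORE_TOPO=1
# Channel count for clique-based all reduce; the value is illustrative.
export RCCL_CLIQUE_ALLREDUCE_NCHANNELS=4
echo "clique=$RCCL_ENABLE_CLIQUE ignore_topo=$RCCL_CLIQUE_IGNORE_TOPO nchannels=$RCCL_CLIQUE_ALLREDUCE_NCHANNELS"
```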
- The install script's '-r' flag, when invoked alone, no longer incorrectly deletes existing builds.
- Managed memory is not currently supported for clique-based kernels
- Compatibility with NCCL 2.8.4
- Additional tuning for clique-based kernels
- Enabling GPU direct RDMA read from GPU
- Fixing potential memory leak issue when re-creating multiple communicators within same process
- Improved topology detection
- Experimental support for clique-based kernels (opt in with RCCL_ENABLE_CLIQUE=1)
- Clique-based kernels may offer better performance for smaller input sizes
- Clique-based kernels are currently only enabled for AllReduce under a certain byte limit (controlled via RCCL_CLIQUE_ALLREDUCE_BYTE_LIMIT)
- Performance improvements for Rome-based systems
- Clique-based kernels are currently experimental and have not been fully tested on all topologies. By default, clique-based kernels are disabled if the detected topology is not supported (override with RCCL_FORCE_ENABLE_CLIQUE)
- Clique-based kernels may hang if there are differences between environment variables set across ranks.
- Clique-based kernels may fail if the input / output device pointers are not the base device pointers returned by hipMalloc.
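A minimal sketch of opting in to the experimental clique-based kernels described above; the byte limit value is illustrative:

```shell
# Opt in to the experimental clique-based kernels.
export RCCL_ENABLE_CLIQUE=1
# AllReduce byte limit below which clique-based kernels apply; value illustrative.
export RCCL_CLIQUE_ALLREDUCE_BYTE_LIMIT=1048576
# For topologies not supported by default only:
# export RCCL_FORCE_ENABLE_CLIQUE=1
echo "clique=$RCCL_ENABLE_CLIQUE limit=$RCCL_CLIQUE_ALLREDUCE_BYTE_LIMIT"
```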
- Adding support for alltoallv RCCL kernel
- Modifications to topology based on XGMI links
- Support for static library builds
- Updated to RCCL API version 2.7.6
- Added gather, scatter and all-to-all collectives
- Updated to RCCL API version 2.6.4
- Compatibility with NCCL 2.6
- Network interface improvements with API v3
- Fixing issues and build time improvements for hip-clang
- Network topology detection
- Improved CPU type detection
- Infiniband adaptive routing support
- Switched to hip-clang as default compiler
- Deprecated hcc build