SX-Aurora/veda-pytorch

VEDA PyTorch

VEDA PyTorch is a library to add device support for the NEC SX-Aurora TSUBASA into PyTorch.


Release Notes

v14.0.1
  • Increased c10d compatibility (tested on v2.5-2.9)
  • Added better error handling when running on systems with missing VEOS installation.
v14

Starting with v14, VEDA PyTorch is no longer distributed as a precompiled binary; instead it is compiled as a PyTorch C++ extension on the target machine, so you no longer need to install a matching binary package.

We also added an experimental implementation for using NEC MPI. You can create the process group as follows:

```python
import os
import torch.distributed

torch.distributed.init_process_group(
    backend='veda',
    world_size=int(os.environ['MPISIZE']),  # NEC MPI exports these as strings
    rank=int(os.environ['MPIRANK']),
    store=torch.distributed.Store(),
)
```
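Because NEC MPI exposes the rank and world size only as environment strings, they must be converted to int before being handed to `init_process_group`. A minimal sketch of that conversion (`mpi_env` is a hypothetical helper name, not part of veda-pytorch; the single-process defaults are an assumption):

```python
import os

# Hypothetical helper (not part of veda-pytorch): read an NEC MPI environment
# variable such as MPISIZE or MPIRANK, converting the string value to int and
# falling back to a single-process default when the variable is unset.
def mpi_env(name: str, default: int) -> int:
    return int(os.environ.get(name, default))

world_size = mpi_env("MPISIZE", 1)  # 1 process when not launched via MPI
rank = mpi_env("MPIRANK", 0)        # rank 0 when not launched via MPI
```

The defaults let the same script run unmodified both under `mpirun` and as a plain single-process job.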

Further changes:

  • Added support for PyTorch v2.9.0
  • Added arange.start_out
  • Added function tracing; activate by setting `TUNGL_LOG=TRACE`
  • Bugfix for aten::cat.out
  • Bugfix for copy_
  • Bugfix for torch.load(location='ve')
  • Removed unnecessary context sync
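For example, the new function tracing can be enabled for a single run by setting the environment variable inline (`train.py` is a placeholder for your own script):

```shell
# Enable VEDA PyTorch function tracing for one invocation only
TUNGL_LOG=TRACE python train.py
```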
v13
  • Fixed torch.ve.set_device
  • Fixed allocation on wrong VE in multi-process execution
  • Improved error messages
  • Upgraded build script for PyTorch >=2.7!
v12
  • Added auto plugin loading for PyTorch. `import veda.pytorch` is no longer required with PyTorch >=2.5!
v11
  • Fixed shutdown problem in mixed GPU/VE use cases.
v10
  • Support for PyTorch v2.3.1
  • Support for SX-Aurora VE3
v9
  • Support for PyTorch v2.3.0
v8
  • Added torch.logical_not
v7
  • Support for PyTorch v2.0.0
  • Support for PyTorch v1.13.0
  • Added torch.log1p
v6
  • Support for PyTorch v1.12.0 and v1.12.1
v5
  • Added
    • torch.clamp
    • torch.clamp_max
    • torch.clamp_min
    • torch.exp
    • torch.log
    • torch.norm
    • torch.pow
    • torch.where
  • Fixed conversion from numeric value to bool
  • Fixed calling torch.ve.memory_allocated() without device id
  • Prevented 0-byte allocations from PyTorch from being passed on to VEDA
v4
  • Fixed possible segfault in tensor resize if no storage is initialized
  • Fixed dtype handling in Scalar-to-Tensor operations
v3
  • Added squeeze and unsqueeze handlers
v2
  • Minor changes to enable PyTorch v1.11.0
  • Fixed vedaInit error checking to ignore if already initialized
v1 Initial Release

About

VEDA PyTorch Integration
