zigTensor is a fast, flexible machine learning library written entirely in Zig; the design is heavily inspired by the Flashlight library.
| Requirement | Notes |
| --- | --- |
| Zig version | `main` |
| ArrayFire | latest (via Homebrew) |
| OS | macOS & Linux (verified on Ubuntu) |
Install Zig with zigup:

```sh
zigup master
```

Install ArrayFire with Homebrew:

```sh
brew install arrayfire
```
See the GitHub Actions workflow for an example setup on Ubuntu.
The project is in the incredibly early stages of development; only the core Tensor/Tensor Ops functionality currently works, and Autograd is a WIP. Development is currently focused solely on macOS/Linux, leveraging ArrayFire for the backend.
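A minimal usage example: create two random 5x5 tensors on the ArrayFire backend and add them.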
```zig
const std = @import("std");
const zt = @import("zigTensor");

const DType = zt.tensor.DType;
const deinit = zt.tensor.deinit;
const Tensor = zt.tensor.Tensor;

pub fn main() !void {
    const allocator = std.heap.c_allocator; // or your preferred allocator
    defer deinit(); // deinit global singletons (e.g. ArrayFire Backend/DeviceManager)

    var a = try zt.tensor.rand(allocator, &.{ 5, 5 }, DType.f32);
    defer a.deinit();
    var b = try zt.tensor.rand(allocator, &.{ 5, 5 }, DType.f32);
    defer b.deinit();
    var c = try zt.tensor.add(allocator, Tensor, a, Tensor, b); // operator overloading pending (likely utilize comath library)
    defer c.deinit();
}
```
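If you want to consume zigTensor as a package from your own project, the wiring in `build.zig` might look roughly like the sketch below. The dependency and module names (`zigTensor`) are assumptions based on the import above, and the `std.Build` API changes frequently on Zig `main`, so treat the exact field names as version-dependent.

```zig
// build.zig (sketch) - assumes a "zigTensor" entry in build.zig.zon exposing a
// "zigTensor" module; check the project's build.zig for the actual names.
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "zt-example",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Fetch the dependency declared in build.zig.zon and expose its module
    // to the executable under the import name "zigTensor".
    const zt_dep = b.dependency("zigTensor", .{
        .target = target,
        .optimize = optimize,
    });
    exe.root_module.addImport("zigTensor", zt_dep.module("zigTensor"));

    b.installArtifact(exe);
}
```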
- bindings
  - ArrayFire // TODO: optimize
  - oneDNN (WIP)
- autograd (WIP)
  - Functions // TODO: debug oneDNN (TensorExtension) fns
  - Utils
  - Variable // TODO: optimize
- common (WIP)
- contrib
- dataset
- distributed
- meter
- nn
- optim
- runtime (WIP)
  - CUDADevice
  - CUDAStream
  - CUDAUtils
  - Device
  - DeviceManager
  - DeviceType
  - Stream
  - SynchronousStream
- tensor
  - backend
    - ArrayFire (current focus - WIP)
      - mem
        - CachingMemoryManager
        - DefaultMemoryManager
        - MemoryManagerAdapter
        - MemoryManagerAdapterDeviceInterface
        - MemoryManagerInstaller
      - AdvancedIndex (CUDA specific API)
      - ArrayFireBLAS
      - ArrayFireBackend
      - ArrayFireBinaryOps
      - ArrayFireCPUStream
      - ArrayFireReductions
      - ArrayFireShapeAndIndex
      - ArrayFireTensor
      - ArrayFireUnaryOps
      - Utils
      - mem
    - JIT
    - oneDNN
    - Stub
  - ArrayFire (current focus - WIP)
  - CUDAProfile
  - Compute
  - DefaultTensorType
  - Index
  - Init
  - Profile
  - Random
  - Shape
  - TensorAdapter
  - TensorBackend
  - TensorBase
  - TensorExtension
  - Types
- backend
  - ArrayFire (current focus - WIP)
    - CPU
    - OpenCL
    - CUDA
  - JIT
  - oneDNN (WIP - first implementing for use with Autograd ops)
  - Stub
- Language APIs
  - C compatible API
  - JS Bindings
    - FFI
    - NAPI
    - WASM