allumette is a toy tensor library built for fun, to better understand automatic differentiation. It is inspired by a small cohort of similar projects.
`Dataset` provides a few ways to create synthetic datasets.
```rust
use allumette::{
    backend::backend_type::{Par, Seq},
    data::cpu_tensor_data::CpuTensorData,
    training::{dataset::Dataset, train},
};

let pts = 10;
let dataset = Dataset::simple(pts);
let hidden_layer_size = 3;
let learning_rate = 0.5;
let iterations = 200;

// use Par instead of Seq to leverage rayon's parallel iterators
train::train::<Seq, CpuTensorData>(dataset, learning_rate, iterations, hidden_layer_size);
```

Part of the codebase makes use of the `generic_const_exprs` and `trait_alias` experimental features, so it requires a nightly toolchain.
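Concretely, nightly-only features like these are opted into at the crate root; a minimal sketch of what that looks like (the exact attribute list in the repo may differ):

```rust
// lib.rs: opt in to the experimental features on a nightly toolchain,
// e.g. after running `rustup override set nightly` in the repository
#![allow(incomplete_features)] // generic_const_exprs is marked incomplete
#![feature(generic_const_exprs)]
#![feature(trait_alias)]
```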
The set of dependencies is otherwise pretty limited:
- `wgpu` for the GPU runtime
- `rayon` for the parallel CPU runtime
- `flume` and `futures` for wgpu callbacks
- `bytemuck` to convert binary buffers copied to/from the GPU
- `proptest` for property-based testing
- `rand` for synthetic data generation
Features and planned work:

- parallel backend
- GPU backend
- visualization
- convolution
- ergonomics
- optimizations
- tensor dimension as const generic
proptest's distributions seem to be truly uniform, unlike quickcheck or scalacheck, which deliberately over-sample "hotspot" values such as zero and type boundaries.
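If you do want proptest to hammer those hotspot values, the usual trick is to mix them into the strategy explicitly. A minimal sketch (a hypothetical test, not from this repo):

```rust
use proptest::prelude::*;

// A uniform range strategy essentially never produces exactly 0.0,
// so mix the boundary value in by hand.
fn input() -> impl Strategy<Value = f64> {
    prop_oneof![
        Just(0.0),       // force the hotspot value
        -10.0f64..10.0,  // otherwise sample uniformly from the range
    ]
}

proptest! {
    #[test]
    fn exercises_the_boundary(x in input()) {
        // the property under test would go here
        prop_assert!(x.is_finite());
    }
}
```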
relu'(x) is undefined at x = 0, and by convention I had chosen relu'(0) = 0. The central difference, however, reports a nonsensical value there: (relu(ε) - relu(-ε)) / 2ε = 0.5 for any ε > 0, so the gradient check disagrees with the analytical gradient at exactly that point.
The bug was there for months until I ported the same logic to the GPU, where a test input hit 0 by chance.
Cf. proptest-rs/proptest#82
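A self-contained illustration of the mismatch (a sketch, not allumette's actual gradient-check code):

```rust
/// Analytical ReLU derivative, with the relu'(0) = 0 convention.
fn relu_grad(x: f64) -> f64 {
    if x > 0.0 { 1.0 } else { 0.0 }
}

/// Central-difference approximation of the same derivative.
fn central_diff(x: f64, eps: f64) -> f64 {
    let relu = |v: f64| v.max(0.0);
    (relu(x + eps) - relu(x - eps)) / (2.0 * eps)
}

fn main() {
    // Away from the kink the two agree to within floating-point noise...
    assert!((relu_grad(1.0) - central_diff(1.0, 1e-6)).abs() < 1e-6);
    // ...but at exactly x = 0 the central difference is 0.5 for any eps,
    // while the analytical gradient is 0 by convention.
    println!("analytical: {}", relu_grad(0.0));          // 0
    println!("central:    {}", central_diff(0.0, 1e-6)); // 0.5
}
```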
The GPU is fast, but copying data to and from the CPU is not, and property tests trigger those transfers constantly.