Releases · FluxML/NNlib.jl
v0.8.15
v0.8.14
NNlib v0.8.14
Closed issues:
- support arbitrary number of batch dimensions in batched_mul (#451)
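Issue #451 concerns `batched_mul`, which multiplies matching slices of 3-dimensional arrays along the last (batch) dimension; the request is to accept more than one trailing batch dimension. A minimal sketch of the existing 3D behaviour (the generalisation asked for in the issue is not shown):

```julia
using NNlib

A = rand(Float32, 3, 4, 10)   # ten 3×4 matrices stacked along the batch dimension
B = rand(Float32, 4, 5, 10)   # ten 4×5 matrices
C = batched_mul(A, B)         # ten 3×5 products; also written A ⊠ B
@assert size(C) == (3, 5, 10)
```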
Merged pull requests:
v0.8.13
v0.8.12
v0.8.11
NNlib v0.8.11
Closed issues:
- (Flaky?) CI failures on GHA latest + Buildkite (#359)
Merged pull requests:
- Trigger tagbot on issue comments (#440) (@Saransh-cpp)
- Remove threading from all `∇*conv_filter` and re-enable old tests (#441) (@ToucheSir)
- Slightly faster softplus (#443) (@Sleort)
- Add fold and unfold (#444) (@nikopj)
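PR #443 touches `softplus`. A small sketch of what the function computes; the exact stable formula used internally is an assumption here, not quoted from the PR:

```julia
using NNlib

# softplus(x) = log(1 + exp(x)), evaluated in a numerically stable way
# (roughly log1p(exp(-abs(x))) + relu(x)), so large inputs don't overflow.
@assert softplus(2.0) ≈ log1p(exp(2.0))
softplus(1000.0)              # ≈ 1000.0 rather than Inf
softplus.(randn(Float32, 3))  # applied elementwise by broadcasting
```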
v0.8.10
NNlib v0.8.10
Closed issues:
- Incorrect gradient of convolution w.r.t. weights (#197)
- Create independent documentation for `NNlib.jl`? (#430)
- Nested AD failure with `logσ` after JuliaDiff/ChainRules.jl#644 (#432)
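Issue #432 involves `logσ` (also exported as `logsigmoid`), the numerically stable log of the sigmoid. A brief usage sketch; the nested-AD interaction with ChainRules.jl is not reproduced here:

```julia
using NNlib

@assert logσ(0.0) ≈ log(0.5)
logσ(-1000.0)                   # ≈ -1000.0, where log(σ(-1000.0)) would underflow to -Inf
logsigmoid.(randn(Float32, 4))  # same function under its ASCII name, broadcast over an array
```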
Merged pull requests:
- Add minimal infrastructure for the docs (#431) (@Saransh-cpp)
- Widen activation broadcast rules (#433) (@mcabbott)
- Add basic benchmark harness (#436) (@ToucheSir)
- Remove negative margin in docs CSS (#437) (@ToucheSir)
- Create root level index.html (#438) (@Saransh-cpp)
- Update readme (#439) (@mcabbott)
v0.8.9
NNlib v0.8.9
Merged pull requests:
- make BatchedAdjOrTrans return correct BroadcastStyle (#424) (@chengchingwen)
- Move `ctc_loss` from Flux to NNlib (#426) (@mcabbott)
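PR #426 moves the connectionist temporal classification loss into NNlib. A hedged sketch, assuming the Flux signature carried over unchanged: `ŷ` is a classes-by-time matrix of raw (pre-softmax) scores, `y` is the integer label sequence, and the blank symbol is the last class:

```julia
using NNlib

ŷ = randn(Float32, 5, 20)   # 4 labels + blank, over 20 time steps (assumed layout)
y = [1, 3, 2]               # target label sequence (assumed to be plain integer labels)
NNlib.ctc_loss(ŷ, y)        # scalar loss, differentiable for use in training
```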
v0.8.8
NNlib v0.8.8
Closed issues:
- Activation functions have to be broadcasted by the user to act on arrays (#422)
Merged pull requests:
- support complex input for upsample (#421) (@mloubout)
- Define activation functions taking arrays as input (#423) (@theabhirath)
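PR #421 relaxes element-type requirements so upsampling works on complex-valued arrays, while #422/#423 concern applying activation functions to whole arrays. A short sketch; the exact set of upsampling methods covered by #421 is an assumption:

```julia
using NNlib

x = rand(ComplexF32, 4, 4, 1, 1)   # WHCN layout: width, height, channels, batch
y = upsample_nearest(x, (2, 2))    # size (8, 8, 1, 1), still ComplexF32

# Activation functions are defined on scalars; broadcasting applies them elementwise.
relu.(randn(Float32, 3, 3))
```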
v0.8.7
NNlib v0.8.7
Closed issues:
- v0.8.6 contains breaking changes (#412)
Merged pull requests:
- partly revert changes to stay compatible w NNlibCUDA (#414) (@maxfreu)
- Update CompatHelper.yml (#419) (@CarloLucibello)
- CompatHelper: bump compat for Compat to 4, (keep existing compat) (#420) (@github-actions[bot])
v0.8.6
NNlib v0.8.6
Merged pull requests: