NNI v1.5 Release
New Features and Documentation
Hyper-Parameter Optimization
- New tuner: Population Based Training (PBT)
- Trials can now report infinity and NaN as results
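On the trial side, these special values can now be passed straight to the standard reporting API. A minimal sketch (the diverging-training scenario is illustrative):

```python
import nni

# Hyper-parameters chosen by the tuner for this trial.
params = nni.get_next_parameter()

# Suppose training diverges for this configuration: infinity and NaN
# are now accepted as metric values instead of breaking the trial.
nni.report_intermediate_result(float('inf'))
nni.report_final_result(float('nan'))
```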
Neural Architecture Search
- New NAS algorithm: TextNAS
- ENAS and DARTS now support visualization through the web UI
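As a rough illustration of switching the visualization on, here is a minimal sketch assuming the v1.5 NAS API (`nni.nas.pytorch`); the toy search space and dataset are placeholders, and the `enable_visualization()` call should be checked against the NAS visualization docs:

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from nni.nas.pytorch import mutables
from nni.nas.pytorch.enas import EnasTrainer

class Net(nn.Module):
    """Tiny search space: one LayerChoice between two conv ops."""
    def __init__(self):
        super().__init__()
        self.conv = mutables.LayerChoice([
            nn.Conv2d(1, 8, 3, padding=1),
            nn.Conv2d(1, 8, 5, padding=2),
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        return self.fc(x.flatten(1))

def accuracy(output, target):
    return {'acc': (output.argmax(1) == target).float().mean().item()}

transform = transforms.ToTensor()
train_set = datasets.MNIST('data', train=True, download=True, transform=transform)
valid_set = datasets.MNIST('data', train=False, transform=transform)

model = Net()
trainer = EnasTrainer(
    model,
    loss=nn.CrossEntropyLoss(),
    metrics=accuracy,
    reward_function=lambda output, target: accuracy(output, target)['acc'],
    optimizer=torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9),
    num_epochs=2,
    dataset_train=train_set,
    dataset_valid=valid_set,
)
trainer.enable_visualization()  # writes the graph/log files the web UI reads
trainer.train()
```

The web UI is then launched separately against the log directory the trainer writes; see the NAS visualization documentation for the exact command.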
Model Compression
- New Pruner: GradientRankFilterPruner
- Compressors will validate configuration by default
- Refactor: the pruner now takes the optimizer as an input argument, which makes it easier to support DataParallel and enables more efficient iterative pruning (see the sketch after this list). This is a breaking change for users of iterative pruning algorithms
- Model compression examples are refactored and improved
- Added documentation for implementing a compression algorithm
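To illustrate the optimizer refactor above, here is a minimal sketch of the new pruner construction, assuming the v1.5 PyTorch compression API (import path `nni.compression.torch` and the `AGP_Pruner` class); the model and sparsity schedule are illustrative:

```python
import torch
import torch.nn as nn
from nni.compression.torch import AGP_Pruner  # assumed v1.5 import path

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Illustrative AGP schedule: ramp Conv2d sparsity from 0% to 80% over 10 epochs.
config_list = [{
    'initial_sparsity': 0.0,
    'final_sparsity': 0.8,
    'start_epoch': 0,
    'end_epoch': 10,
    'frequency': 1,
    'op_types': ['Conv2d'],
}]

# The optimizer is now passed to the pruner so it can hook weight updates,
# which is what enables DataParallel support and cheaper iterative pruning.
pruner = AGP_Pruner(model, config_list, optimizer)
model = pruner.compress()

for epoch in range(10):
    # ... the usual training loop goes here ...
    pruner.update_epoch(epoch)  # advance the AGP sparsity schedule
```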
Training Service
- Kubeflow now supports the PyTorchJob CRD v1 (thanks external contributor @jiapinai)
- Experimental DLTS support
Overall Documentation Improvement
- Documentation is significantly improved in grammar, spelling, and wording (thanks external contributor @AHartNtkn)
Fixed Bugs
- ENAS cannot have more than one LSTM layer (thanks external contributor @marsggbo)
- NNI manager's timers will never unsubscribe (thanks external contributor @guilhermehn)
- NNI manager may exhaust heap memory (thanks external contributor @Sundrops)
- Batch tuner does not support customized trials (#2075)
- Experiment cannot be killed if it fails on start (#2080)
- Non-number type metrics break web UI (#2278)
- A bug in the lottery ticket pruner
- Other minor glitches