NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning (AutoML) experiments. The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments, such as a local machine, remote servers, and the cloud (a minimal trial sketch follows the list below).
Who should consider using NNI
- Those who want to try different AutoML algorithms in their training code (model) on their local machine.
- Those who want to run AutoML trial jobs in different environments to speed up search (e.g. remote servers and cloud).
- Researchers and data scientists who want to implement their own AutoML algorithms and compare them with other algorithms.
- ML Platform owners who want to support AutoML in their platform.
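To make the trial concept concrete, here is a minimal sketch of how training code talks to NNI. The train() function and its hyper-parameters are hypothetical placeholders, and the API names reflect current NNI releases (early versions named the parameter-fetching call differently), so treat this as an illustration rather than the exact v0.3.4 interface:
import nni

def train(lr, batch_size):
    # Hypothetical stand-in for a real training loop; returns a dummy accuracy.
    return 0.9

if __name__ == '__main__':
    params = {'lr': 0.01, 'batch_size': 32}        # defaults for standalone runs
    params.update(nni.get_next_parameter() or {})  # values chosen by the tuner
    accuracy = train(params['lr'], params['batch_size'])
    nni.report_final_result(accuracy)              # feed the metric back to NNI
The tuner proposes new parameter values for each trial, and the reported metric guides the search.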
Install through pip
- We only support Linux at the current stage; Ubuntu 16.04 or higher is tested and supported. Simply run the following pip install in an environment that has python >= 3.5.
python3 -m pip install --user --upgrade nni
Note: If you are in a docker container (as root), please remove --user from the installation command.
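To confirm that the package is importable, a quick sanity check (the __version__ attribute is an assumption about the release; a successful import alone is enough):
python3 -c "import nni; print(getattr(nni, '__version__', 'import ok'))"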
Install through source code
- We only support Linux (Ubuntu 16.04 or higher) at the current stage.
- Run the following commands in an environment that has python >= 3.5, git and wget.
git clone -b v0.3.4 https://github.com/Microsoft/nni.git
cd nni
source install.sh
Verify install
The following example is an experiment built on TensorFlow. Make sure you have TensorFlow installed before running it.
- Download the examples by cloning the source code.
git clone -b v0.3.4 https://github.com/Microsoft/nni.git
- Run the mnist example.
nnictl create --config nni/examples/trials/mnist/config.yml
- Wait for the message INFO: Successfully started experiment! in the command line. This message indicates that your experiment has been started successfully. You can explore the experiment using the Web UI url.
INFO: Starting restful server...
INFO: Successfully started Restful server!
INFO: Setting local config...
INFO: Successfully set local config!
INFO: Starting experiment...
INFO: Successfully started experiment!
-----------------------------------------------------------------------
The experiment id is egchD4qy
The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080
-----------------------------------------------------------------------
You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
commands description
1. nnictl experiment show show the information of experiments
2. nnictl trial ls list all of trial jobs
3. nnictl log stderr show stderr log content
4. nnictl log stdout show stdout log content
5. nnictl stop stop an experiment
6. nnictl trial kill kill a trial job by id
7. nnictl --help get help information about nnictl
-----------------------------------------------------------------------
- Open the Web UI url in your browser; you can view detailed information about the experiment and all the submitted trial jobs, as shown below. Here are more Web UI pages. A sketch of the search space behind this example follows below.
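The config.yml passed to nnictl ties an experiment together: among other things, it points at a search space file and at the command that launches each trial. As a hedged sketch (the exact keys and ranges in nni/examples/trials/mnist/search_space.json may differ from these illustrative ones), a search space uses NNI's _type/_value format and can be generated like this:
import json

# Illustrative hyper-parameter ranges; the real mnist example may use
# different keys and values. "choice" and "uniform" are NNI sampling types.
search_space = {
    "learning_rate": {"_type": "choice", "_value": [0.0001, 0.001, 0.01]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64]},
    "dropout_rate": {"_type": "uniform", "_value": [0.5, 0.9]},
}

with open("search_space.json", "w") as f:
    json.dump(search_space, f, indent=2)
The pages below cover search spaces, trials, and experiment configuration in more depth.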
- Install NNI
- Use command line tool nnictl
- Use NNIBoard
- How to define search space
- How to define a trial
- Config an experiment
- How to use annotation
- Run an experiment on local (with multiple GPUs)?
- Run an experiment on multiple machines?
- Run an experiment on OpenPAI?
- Try different tuners and assessors
- Implement a customized tuner
- Implement a customized assessor
- Use Genetic Algorithm to find good model architectures for Reading Comprehension task
This project welcomes contributions and suggestions; we use GitHub issues to track requests and bugs.
Issues with the good first issue label are simple, easy-to-start ones that we recommend new contributors begin with.
To set up an environment for NNI development, refer to the instructions: Set up NNI developer environment
Before you start coding, review and get familiar with the NNI code contribution guideline: Contributing
We are still working on the instructions for How to Debug; you are welcome to contribute questions or suggestions in this area.
The entire codebase is under the MIT license.