
update doc: overview #555

Merged · 9 commits · Jan 21, 2019
70 changes: 41 additions & 29 deletions docs/Overview.md
@@ -1,49 +1,61 @@
# NNI Overview

NNI (Neural Network Intelligence) is a toolkit that helps users design and tune machine learning models (e.g., their hyperparameters), neural network architectures, or a complex system's parameters in an efficient and automatic way. NNI has several appealing properties: ease of use, scalability, flexibility, and efficiency.

* **Easy-to-use**: NNI can be installed easily through Python pip. Only a few lines need to be added to your code in order to use NNI's power. You can use both the command line tool and the WebUI to work with your experiments.
* **Scalability**: Tuning hyperparameters or a neural architecture often demands a large amount of computation resources, so NNI is designed to fully leverage different computation resources, such as remote machines and training platforms (e.g., PAI, Kubernetes). Thousands of trials can run in parallel, depending on the capacity of your configured training platforms.
* **Flexibility**: Besides the rich built-in algorithms, NNI allows users to customize various hyperparameter tuning algorithms, neural architecture search algorithms, early stopping algorithms, etc. Users can also extend NNI with more training platforms, such as virtual machines or Kubernetes services in the cloud. Moreover, NNI can connect to external environments to tune special applications/models on them.
* **Efficiency**: We are intensively working on more efficient model tuning at both the system level and the algorithm level, for example, leveraging early feedback to speed up the tuning procedure.

The figure below shows the high-level architecture of NNI.

<p align="center">
<img src="./img/highlevelarchi.png" alt="drawing" width="700"/>
</p>

## Key Concepts

* *Experiment*: An experiment is one task of, for example, finding the best hyperparameters of a model, or finding the best neural network architecture. It consists of trials and AutoML algorithms.
Contributor: Why use italics instead of bold? I think bold is more conspicuous. (just a suggestion)
* *Search Space*: It means the feasible region for tuning the model, for example, the value range of each hyperparameter (a sketch follows this list).

* *Configuration*: A configuration is an instance from the search space; that is, each hyperparameter has a specific value.
Contributor: I think config.yml is a configuration?

* *Trial*: A trial is an individual attempt at applying a new configuration (e.g., a set of hyperparameter values, a specific neural architecture). Trial code should be able to run with the provided configuration.

* *Tuner*: A tuner is an AutoML algorithm that generates a new configuration for the next trial. A new trial will run with this configuration.

* *Assessor*: An assessor analyzes a trial's intermediate results (e.g., accuracy periodically evaluated on a test dataset) to tell whether the trial can be stopped early or not.

* *Training Platform*: It means where trials are executed. Depending on your experiment's configuration, it could be your local machine, remote servers, or a large-scale training platform (e.g., PAI, Kubernetes).
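
To make Search Space and Configuration concrete, below is a minimal sketch of a search space file in the JSON format described in [SearchSpaceSpec.md](SearchSpaceSpec.md); the hyperparameter names and ranges here are illustrative assumptions, not a prescribed set. A configuration is one sample from this space, e.g., `{"dropout_rate": 0.3, "learning_rate": 0.01, "batch_size": 32}`.

```json
{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "learning_rate": {"_type": "loguniform", "_value": [0.0001, 0.1]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64]}
}
```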

Basically, an experiment runs as follows: the Tuner receives the search space and generates configurations. These configurations are submitted to training platforms, such as the local machine, remote machines, or training clusters. Their performance is reported back to the Tuner, which then generates and submits new configurations.
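
As a concrete illustration of this loop, here is a minimal sketch of a trial in Python using NNI's trial API (`nni.get_next_parameter`, `nni.report_intermediate_result`, `nni.report_final_result`); the training function is a made-up placeholder for the user's own code, and the trial is assumed to run inside a launched experiment.

```python
import nni

def train_and_evaluate(params, epoch):
    # Placeholder for the user's own training code; returns a mock accuracy here.
    return 0.5 + 0.05 * epoch

if __name__ == '__main__':
    # Receive one configuration (a sample from the search space) from the Tuner.
    params = nni.get_next_parameter()

    accuracy = 0.0
    for epoch in range(10):
        accuracy = train_and_evaluate(params, epoch)
        # Intermediate results let the Assessor decide whether to stop this trial early.
        nni.report_intermediate_result(accuracy)

    # The final result is sent back to the Tuner to guide future configurations.
    nni.report_final_result(accuracy)
```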
Contributor: Consider adding a training platform explanation to the key concepts instead of the "such as" example?

Member: Quick question about "SearchSpace": do we want to use it as one term or two words? Agree with PurityFan on Training Platform's definition. Also, I was thinking Configuration also includes the yaml file's definition for the training platform.

Contributor Author: Thanks @PurityFan and @scarlett2018, will add training platform. Let's keep "Search Space" for now, then discuss it when the whole doc is almost ready. Here "Configuration" does not mean the experiment's config file; it means "the feasible region for tuning the model, for example, the value range of each hyperparameter". @PurityFan is this confusing?

Contributor: I can understand what you mean. We need to distinguish between config.yml and configuration.


For each experiment, the user only needs to define a search space and update a few lines of code, and then leverage NNI's built-in Tuner/Assessor and training platforms to search for the best hyperparameters and/or neural architecture. There are basically three steps (an example experiment file follows the figure below):

Member: hyperparameters

Contributor Author: fixed, thanks.

>Step 1: [Define search space](SearchSpaceSpec.md)

>Step 2: [Update model codes](howto_1_WriteTrial.md)

>Step 3: [Define Experiment](ExperimentConfig.md)


<p align="center">
<img src="./img/3_steps.jpg" alt="drawing"/>
</p>
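
For Step 3, a minimal experiment file might look like the sketch below; the field names follow the format described in [ExperimentConfig.md](ExperimentConfig.md), while the concrete values (experiment name, trial command, limits) are illustrative assumptions. You would then launch the experiment with `nnictl create --config config.yml`.

```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1              # how many trials run in parallel
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local   # or remoteMachine, pai, kubeflow
searchSpacePath: search_space.json
useAnnotation: false
tuner:
  builtinTunerName: TPE          # one of NNI's built-in tuners
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 mnist.py      # command that starts one trial
  codeDir: .
  gpuNum: 0
```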

For more details about how to run an experiment, please refer to [Get Started]().

## Learn More
* [Get started](GetStarted.md)
* [Install NNI](Installation.md)
* [Use command line tool nnictl](NNICTLDOC.md)
* [Use NNIBoard](WebUI.md)
* [Use annotation](howto_1_WriteTrial.md#nni-python-annotation)
### **Tutorials**
* [How to run an experiment on local (with multiple GPUs)?](tutorial_1_CR_exp_local_api.md)
* [How to adapt your trial code on NNI?]()
* [What are tuners supported by NNI?]()
* [How to customize your own tuner?]()
* [What are assessors supported by NNI?]()
* [How to customize your own assessor?]()
* [How to run an experiment on local?](tutorial_1_CR_exp_local_api.md)
* [How to run an experiment on multiple machines?](tutorial_2_RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PAIMode.md)
* [How to troubleshoot when using NNI?]()
* [Examples]()
* [Reference]()
Binary file added docs/img/highlevelarchi.png
25 changes: 0 additions & 25 deletions mkdocs.yml

This file was deleted.