
update doc: overview #555

Merged: 9 commits merged on Jan 21, 2019

Changes from 2 commits
63 changes: 34 additions & 29 deletions docs/Overview.md
@@ -1,49 +1,54 @@
# NNI Overview

NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments. For each experiment, users only need to define a search space and update a few lines of code, and then leverage NNI's built-in algorithms and training services to search for the best hyperparameters and/or neural architecture.
NNI (Neural Network Intelligence) is a toolkit to help users design and tune machine learning models (e.g., hyperparameters), neural network architectures, or a complex system's parameters, in an efficient and automatic way. NNI has several appealing properties: easy-to-use, scalability, flexibility, extensibility, efficiency.
Contributor:

These words scalability, flexibility and extensibility may express a similar meaning, so I suggest just keeping the word scalability.

- NNI has several appealing properties: easy-to-use, scalability, flexibility, extensibility, efficiency.
+ NNI has several appealing features: easy-to-use, scalability and efficiency.

Contributor Author:

Reasonable, I will remove extensibility.


>Step 1: [Define search space](SearchSpaceSpec.md)
* **Easy-to-use**: NNI can be easily installed through python pip. There are only several lines to be added to your code in order to use NNI's power to tune your model. You can use both a command line tool and WebUI to view and control your experiments (see the sketch after this feature list).
Contributor:

There are only several lines to be added to your code?

Member:

As the value proposition for NNI has extended to more than just model tuning (in the 1st section), you might also not want to only emphasize NNI's power with "tune your model".

both "a", remove the "a". "view and control" -> "work with".

Contributor Author:

@SparkSnail fixed, thanks.
@scarlett2018 you are right, how about directly removing 'to tune your model'?

* **Scalability**: Tuning hyperparameters or neural architectures often demands a large amount of computation resources, while NNI is designed to fully leverage different computation resources, such as remote machines and training platforms (e.g., PAI, Kubernetes). NNI can run thousands of parallel trials, depending on the capacity of your configured training platforms.
Contributor:

Maybe "NNI could run thousands of parallel trials by depending on the capacity of ..."

Contributor Author:

fixed, thanks.

* **Flexibility**: Besides rich built-in algorithms, NNI allows users to customize various hyperparameter tuning algorithms, neural architecture search algorithms, early stopping algorithms, etc.
* **Extensibility**: Users could extend NNI with more training platforms, running trials on those computation resources, for example, connecting NNI with virtual machines, Kubernetes services, etc. in clouds such as Azure, AWS, and Aliyun. Users could also connect NNI to external environments to tune the applications/models on them.
Contributor:

For Easy-to-use, I think we may improve the convenience of NNI later, because it seems that specific Python and Node versions are required. Shall NNI provide a conda package later (my suggestion)?
For Scalability, Flexibility, and Extensibility, it seems that the Scalability and Extensibility sections convey platform scalability and the Flexibility section conveys algorithm scalability.
One more thing, we can mention that a Docker environment is provided, which is an appealing feature.

* **Efficiency**: We are intensively working on more efficient model tuning at both the system level and the algorithm level, for example, leveraging early feedback to speed up the tuning procedure.
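As a rough illustration of the "few lines" mentioned above, here is a minimal sketch of a trial script using NNI's trial API (`nni.get_next_parameter`, `nni.report_intermediate_result`, `nni.report_final_result`); the training loop and hyperparameter names are placeholders, not code from this repository:

```python
import nni

def train(params):
    # Placeholder training loop, standing in for the user's own model code.
    accuracy = 0.0
    for epoch in range(10):
        # ... train one epoch using params['lr'], params['batch_size'] ...
        accuracy = min(1.0, accuracy + 0.1)        # dummy metric for illustration
        nni.report_intermediate_result(accuracy)   # periodic metric, seen by the Assessor

    nni.report_final_result(accuracy)              # final metric, seen by the Tuner

if __name__ == '__main__':
    # Receive one configuration (a set of hyperparameter values) from the Tuner.
    params = nni.get_next_parameter()
    train(params)
```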

>Step 2: [Update model codes](howto_1_WriteTrial.md)
## Key Concepts

>Step 3: [Define Experiment](ExperimentConfig.md)
* *Experiment*: An experiment is one task of, for example, finding the best hyperparameters of a model, or finding the best neural network architecture. It consists of trials and AutoML algorithms.
Contributor:

Why use italics instead of bold? I think bold is more conspicuous (just a suggestion).


* *Search Space*: It means the feasible region for tuning the model, for example, the value range of each hyperparameter (see the example after this list).

<p align="center">
<img src="./img/3_steps.jpg" alt="drawing"/>
</p>
* *Configuration*: A configuration is an instance from the search space, that is, each hyperparameter has a specific value.
Contributor:

I think config.yml is a configuration?


After the user submits the experiment through the command line tool [nnictl](../tools/README.md), a daemon process (NNI manager) takes care of the search process. The NNI manager continuously gets search settings generated by tuning algorithms, then asks the training service component to dispatch and run trial jobs in a targeted training environment (e.g., local machine, remote servers, and cloud). The results of trial jobs, such as model accuracy, are sent back to the tuning algorithms for generating more meaningful search settings. The NNI manager stops the search process after it finds the best models.
* *Trial*: Trial is an individual attempt at applying a new configuration, for example, a set of hyperparameter values on a model, or a specific neural architecture.

## Architecture Overview
<p align="center">
<img src="./img/nni_arch_overview.png" alt="drawing"/>
</p>
* *Tuner*: Tuner is an AutoML algorithm, which generates a new configuration for the next try. A new trial runs with this configuration.

Users can use nnictl and/or the visualized Web UI (nniboard) to monitor and debug a given experiment.
* *Assessor*: Assessor analyzes a trial's intermediate results (e.g., periodically evaluated accuracy on the test dataset) to tell whether this trial can be stopped early or not.
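To make the difference between a Search Space and a Configuration concrete, here is a small sketch. The hyperparameter names and ranges are invented for illustration, though the `_type`/`_value` structure follows NNI's JSON search space format:

```python
# Search Space: the feasible region for each hyperparameter. NNI normally
# reads this from a JSON file (e.g., search_space.json); shown here as the
# equivalent Python dict.
search_space = {
    "lr":         {"_type": "uniform", "_value": [0.0001, 0.1]},
    "batch_size": {"_type": "choice",  "_value": [16, 32, 64, 128]},
}

# Configuration: one concrete instance drawn from that space; this is what a
# single trial receives.
configuration = {"lr": 0.01, "batch_size": 32}
```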

NNI provides a set of examples in the package to get you familiar with the above process. In the following example [/examples/trials/mnist], we have already set up the configuration and updated the training code for you. You can directly run the following command to start an experiment.
Basically, an experiment runs as follows: the Tuner receives the search space and generates configurations. These configurations are submitted to training platforms, such as the local machine, remote machines, or training clusters. Their performance is reported back to the Tuner. Then, new configurations are generated and submitted (see the loop sketched after this conversation).
Contributor:

Consider adding a training platform explanation to the key concepts instead of the “such as” example?

Member:

Quick question about "SearchSpace": do we want to use it as one term or 2 words?

Agree with PurityFan on Training Platform's definition. Also, I was thinking Configuration also includes the YAML file's definition for the training platform.

Contributor Author:

Thanks @PurityFan and @scarlett2018, will add training platform. Let's keep 'Search Space' for now, then discuss it when the whole doc is almost ready.
Here 'Configuration' does not mean the experiment's config file. It means 'an instance from the search space, that is, each hyperparameter has a specific value'. @PurityFan is this confusing?

Contributor:

I can understand what you mean. Need to distinguish between config.yml and configuration.
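The experiment flow described above can be summarized as a simple control loop. The sketch below is illustrative only, assuming hypothetical `tuner` and `platform` objects whose method names mirror the concepts in this doc; it is not NNI's actual dispatcher code:

```python
def run_experiment(tuner, platform, search_space, max_trials):
    # Illustrative control loop only; NNI's real dispatcher is more involved.
    tuner.update_search_space(search_space)
    for trial_id in range(max_trials):
        config = tuner.generate_parameters(trial_id)          # Tuner proposes a configuration
        metric = platform.run_trial(config)                   # trial runs on local/remote/cluster
        tuner.receive_trial_result(trial_id, config, metric)  # performance is reported back
```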


## Key Concepts
For each experiment, users only need to define a search space and update a few lines of code, and then leverage NNI's built-in Tuner/Assessor and training platforms to search for the best hyperparameters and/or neural architecture. There are basically 3 steps:

Member:

hyperparameters

Contributor Author:

fixed, thanks.

**Experiment** in NNI is a method for testing different assumptions (hypotheses) by Trials under conditions constructed and controlled by NNI. During the experiment, one or more conditions are allowed to change in an organized manner, and the effects of these changes on associated conditions are observed.
>Step 1: [Define search space](SearchSpaceSpec.md)

>Step 2: [Update model codes](howto_1_WriteTrial.md)

### **Trial**
**Trial** in NNI is an individual attempt at applying a set of parameters on a model.
>Step 3: [Define Experiment](ExperimentConfig.md)

### **Tuner**
**Tuner** in NNI is an implementation of the Tuner API for a specific tuning algorithm. [Read more about the Tuners supported in the latest NNI release](HowToChooseTuner.md)
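As a hedged sketch of what implementing the Tuner API can look like (the method names follow NNI's `Tuner` interface; the random-sampling logic and class name are invented for illustration):

```python
import random
from nni.tuner import Tuner

class NaiveRandomTuner(Tuner):
    # Samples uniformly from 'choice'-typed hyperparameters; illustration only.

    def update_search_space(self, search_space):
        self.search_space = search_space

    def generate_parameters(self, parameter_id):
        # Propose a new configuration by sampling each hyperparameter independently.
        return {name: random.choice(spec["_value"])
                for name, spec in self.search_space.items()
                if spec["_type"] == "choice"}

    def receive_trial_result(self, parameter_id, parameters, value):
        # A real tuner would use `value` to guide future configurations.
        pass
```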

### **Assessor**
**Assessor** in NNI is an implementation of the Assessor API for optimizing the execution of an experiment.
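Similarly, a minimal Assessor sketch, assuming NNI's `Assessor` interface with `assess_trial`; the threshold-based stopping rule and class name are invented for illustration:

```python
from nni.assessor import Assessor, AssessResult

class ThresholdAssessor(Assessor):
    # Stops a trial whose latest intermediate result falls below a fixed
    # threshold; the rule is invented for illustration.

    def assess_trial(self, trial_job_id, trial_history):
        # trial_history is the list of intermediate results reported so far.
        if len(trial_history) >= 3 and trial_history[-1] < 0.1:
            return AssessResult.Bad   # suggest early-stopping this trial
        return AssessResult.Good      # let the trial keep running
```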
<p align="center">
<img src="./img/3_steps.jpg" alt="drawing"/>
</p>

For more details about how to run an experiment, please refer to [Get Started]().

## Learn More
* [Get started](GetStarted.md)
* [Install NNI](Installation.md)
* [Use command line tool nnictl](NNICTLDOC.md)
* [Use NNIBoard](WebUI.md)
* [Use annotation](howto_1_WriteTrial.md#nni-python-annotation)
### **Tutorials**
* [How to run an experiment on local (with multiple GPUs)?](tutorial_1_CR_exp_local_api.md)
* [How to adapt your trial code on NNI?]()
* [What are tuners supported by NNI?]()
* [How to customize your own tuner?]()
* [What are assessors supported by NNI?]()
* [How to customize your own assessor?]()
* [How to run an experiment on local?](tutorial_1_CR_exp_local_api.md)
* [How to run an experiment on multiple machines?](tutorial_2_RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PAIMode.md)
* [How to troubleshoot when using NNI?]()
* [Examples]()
* [Reference]()