This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

doc refactor to master #648

Merged
merged 31 commits on Jan 28, 2019
31 commits
ec7fdb6
initial commit for document refactor (#533)
leckie-chn Dec 28, 2018
1b4684f
mnist examples doc (#566)
QuanluZhang Jan 7, 2019
c008d0c
Docs refactor of Tutorial (QuickStart, Tuners, Assessors) (#554)
PurityFan Jan 15, 2019
585d076
Dev doc: Add docs for Trials, SearchSpace, Annotation and GridSearch …
Crysple Jan 16, 2019
cac2b83
Chec dev doc (#606)
chicm-ms Jan 17, 2019
b74dffa
update dev-doc to sphinx (#630)
leckie-chn Jan 21, 2019
e7e87c5
update doc: overview (#555)
QuanluZhang Jan 21, 2019
1509c83
cifar10 example doc (#573)
PurityFan Jan 21, 2019
74e9031
Update doc: refactor ExperimentConfig.md (#602)
SparkSnail Jan 21, 2019
4516e30
update doc index & add api reference (#636)
leckie-chn Jan 21, 2019
a6f3b03
Add sklearn_example.md (#647)
SparkSnail Jan 23, 2019
d6bbb79
Merge remote-tracking branch 'upstream/master' into dev-doc-conflict
leckie-chn Jan 24, 2019
5dbeb37
revert nnictl before sphinx try
leckie-chn Jan 24, 2019
1ecf76e
fix mnist.py example
leckie-chn Jan 24, 2019
1bd16f4
add SQuAD_evolution_examples.md (#620)
xuehui1991 Jan 24, 2019
72f6467
Add GBDT example doc (#654)
xuehui1991 Jan 24, 2019
bf6fdbc
Merge remote-tracking branch 'upstream/master' into dev-doc-fix1
leckie-chn Jan 24, 2019
1b6874b
Update SearchSpaceSpec (#656)
Crysple Jan 25, 2019
c3a1413
fix color for zejun
leckie-chn Jan 25, 2019
b6321cf
fix mnist before
leckie-chn Jan 25, 2019
7163384
fix image
leckie-chn Jan 25, 2019
584080d
Fix doc format (#658)
xuehui1991 Jan 25, 2019
fbd0c3f
update index
leckie-chn Jan 25, 2019
a6d04b4
fix typo
leckie-chn Jan 25, 2019
95245d2
fix broken-links of quickstart/tuners/assessors (#662)
PurityFan Jan 26, 2019
bd28fa8
Dev doc fix4 (#672)
leckie-chn Jan 28, 2019
8ce13e9
Dev doc (#669)
xumeng723 Jan 28, 2019
ce9cb23
update doc (#670)
QuanluZhang Jan 28, 2019
81a32af
Update customized assessor doc (#671)
chicm-ms Jan 28, 2019
8e97ee2
fix typo
leckie-chn Jan 28, 2019
6919976
update doc (#673)
QuanluZhang Jan 28, 2019
1 change: 0 additions & 1 deletion _config.yml

This file was deleted.

3 changes: 3 additions & 0 deletions docs/.gitignore
@@ -0,0 +1,3 @@
_build
_static
_templates
80 changes: 46 additions & 34 deletions docs/AnnotationSpec.md
@@ -1,58 +1,70 @@
# NNI Annotation

To provide a good user experience and reduce user effort, we need to design a good annotation grammar.

If users use NNI system, they only need to:
## Overview

1. Use nni.get_next_parameter() to retrieve hyper-parameters from the Tuner. Before using any other annotation, use the following annotation at the beginning of the trial code:
'''@nni.get_next_parameter()'''
To improve user experience and reduce user effort, we design an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotation strings, which do not affect the execution of the original code.

2. Annotation variable in code as:
Below is an example:

'''@nni.variable(nni.choice(2,3,5,7),name=self.conv_size)'''
```python
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1
```
The meaning of this example is that NNI will choose one of several values (0.1, 0.01, 0.001) to assign to the learning_rate variable. Specifically, the first line is an NNI annotation, which is a single string. The following line is an assignment statement. What NNI does here is replace the right-hand value of this assignment statement according to the information provided by the annotation line.

3. Annotation intermediate in code as:

'''@nni.report_intermediate_result(test_acc)'''
In this way, users can either run the Python code directly or launch NNI to tune hyper-parameters in this code, without changing any code.
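
For instance, the annotated snippet above remains valid plain Python, because each annotation is just a standalone string literal; the file name `trial.py` mentioned in the comment is only illustrative:

```python
# Running the annotated file directly (e.g. `python trial.py`) executes the
# original code unchanged: the annotation is a bare string literal and has no
# effect outside of NNI.
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1
print(learning_rate)  # always prints 0.1 when run without NNI
```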

4. Annotation output in code as:
## Types of Annotation

'''@nni.report_final_result(test_acc)'''
In NNI, there are four main types of annotation:

5. Annotation `function_choice` in code as:

'''@nni.function_choice(max_pool(h_conv1, self.pool_size),avg_pool(h_conv1, self.pool_size),name=max_pool)'''
### 1. Annotate variables

In this way, they can easily implement automatic tuning on NNI.
`'''@nni.variable(sampling_algo, name)'''`

For `@nni.variable`, `nni.choice` is the type of search space and there are 10 types to express your search space as follows:
`@nni.variable` is used in NNI to annotate a variable.

1. `@nni.variable(nni.choice(option1,option2,...,optionN),name=variable)`
Which means the variable value is one of the options, which should be a list. The elements of options can themselves be stochastic expressions.
**Arguments**

2. `@nni.variable(nni.randint(upper),name=variable)`
Which means the variable value is a random integer in the range [0, upper).
- **sampling_algo**: Sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of an `nni.` prefix and a search space type specified in [SearchSpaceSpec](SearchSpaceSpec.md), such as `choice` or `uniform`.
- **name**: The name of the variable that the selected value will be assigned to. Note that this argument should be the same as the left-hand side of the following assignment statement.

3. `@nni.variable(nni.uniform(low, high),name=variable)`
Which means the variable value is a value uniformly between low and high.
An example here is:

4. `@nni.variable(nni.quniform(low, high, q),name=variable)`
Which means the variable value is a value like round(uniform(low, high) / q) * q
```python
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1
```
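
As an additional illustration (not part of the original doc), other sampling functions from [SearchSpaceSpec](SearchSpaceSpec.md) can be used in the same way; `dropout_rate` and `batch_size` here are hypothetical variable names:

```python
'''@nni.variable(nni.uniform(0.5, 0.9), name=dropout_rate)'''
dropout_rate = 0.75

'''@nni.variable(nni.choice(16, 32, 64), name=batch_size)'''
batch_size = 32
```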

5. `@nni.variable(nni.loguniform(low, high),name=variable)`
Which means the variable value is a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed.
### 2. Annotate functions

6. `@nni.variable(nni.qloguniform(low, high, q),name=variable)`
Which means the variable value is a value like round(exp(uniform(low, high)) / q) * q
`'''@nni.function_choice(*functions, name)'''`

7. `@nni.variable(nni.normal(label, mu, sigma),name=variable)`
Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma.
`@nni.function_choice` is used to choose one from several functions.

8. `@nni.variable(nni.qnormal(label, mu, sigma, q),name=variable)`
Which means the variable value is a value like round(normal(mu, sigma) / q) * q
**Arguments**

9. `@nni.variable(nni.lognormal(label, mu, sigma),name=variable)`
Which means the variable value is a value drawn according to exp(normal(mu, sigma))
- **\*functions**: Candidate functions to choose from. Note that each should be a complete function call with arguments, such as `max_pool(hidden_layer, pool_size)`.
- **name**: The name of the function that will be replaced in the following assignment statement.

10. `@nni.variable(nni.qlognormal(label, mu, sigma, q),name=variable)`
Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q
An example here is:

```python
"""@nni.function_choice(max_pool(hidden_layer, pool_size), avg_pool(hidden_layer, pool_size), name=max_pool)"""
h_pooling = max_pool(hidden_layer, pool_size)
```

### 3. Annotate intermediate result

`'''@nni.report_intermediate_result(metrics)'''`

`@nni.report_intermediate_result` is used to report intermediate results; its usage is the same as `nni.report_intermediate_result` in [Trials.md](Trials.md).
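
A minimal sketch of how this annotation is typically placed inside a training loop; the `evaluate` helper and its values are assumptions for illustration only:

```python
def evaluate(epoch):
    # Placeholder metric; a real trial would compute this from the model.
    return 0.1 * epoch

for epoch in range(10):
    test_acc = evaluate(epoch)
    '''@nni.report_intermediate_result(test_acc)'''
```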

### 4. Annotate final result

`'''@nni.report_final_result(metrics)'''`

`@nni.report_final_result` is used to report the final result of the current trial; its usage is the same as `nni.report_final_result` in [Trials.md](Trials.md).
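
A minimal sketch, usually placed once at the end of the trial; `accuracy_history` is a hypothetical list of per-epoch metrics:

```python
accuracy_history = [0.72, 0.81, 0.85]  # hypothetical per-epoch accuracies
best_acc = max(accuracy_history)
'''@nni.report_final_result(best_acc)'''
```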
78 changes: 78 additions & 0 deletions docs/Builtin_Assessors.md
@@ -0,0 +1,78 @@
# Builtin Assessors

NNI provides state-of-the-art tuning algorithms in its builtin assessors and makes them easy to use. Below is a brief overview of NNI's current builtin Assessors:

|Assessor|Brief Introduction of Algorithm|Suggested Scenario|
|---|---|---|
|**Medianstop**<br>[(Usage)](#MedianStop)|Medianstop is a simple early stopping rule mentioned in the [paper][1]. It stops a pending trial X at step S if the trial's best objective value by step S is strictly worse than the median value of the running averages of all completed trials' objectives reported up to step S.|It is applicable to a wide range of performance curves, and thus can be used in various scenarios to speed up the tuning progress.|
|[Curvefitting][2]<br>[(Usage)](#Curvefitting)|Curve Fitting Assessor is an LPA (learning, predicting, assessing) algorithm. It stops a pending trial X at step S if the prediction of the final epoch's performance is worse than the best final performance in the trial history. In this algorithm, we use 12 curves to fit the accuracy curve.|It is applicable to a wide range of performance curves, and thus can be used in various scenarios to speed up the tuning progress. Even better, it is able to handle and assess curves with similar performance.|

<br>
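
The median stopping rule described in the table above can be sketched as follows. This is only an illustrative re-implementation of the rule for the maximize case, not NNI's actual code; `trial_history` and `completed_histories` are assumed data structures:

```python
from statistics import median

def median_stop(trial_history, completed_histories, step):
    """Illustrative sketch of the median stopping rule (optimize_mode = maximize)."""
    # Best objective value of the pending trial by step `step`.
    best_so_far = max(trial_history[:step])
    # Running average of each completed trial's objectives reported up to step `step`.
    running_avgs = [sum(h[:step]) / step for h in completed_histories if len(h) >= step]
    if not running_avgs:
        return False  # nothing to compare against yet
    # Stop if the trial's best value is strictly worse than the median running average.
    return best_so_far < median(running_avgs)

# Example: a weak trial compared against two completed trials.
print(median_stop([0.2, 0.25], [[0.3, 0.5, 0.6], [0.4, 0.55, 0.7]], step=2))  # True
```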

## Usage of Builtin Assessors

Using builtin assessors provided by the NNI SDK requires declaring the **builtinAssessorName** and **classArgs** in the `config.yml` file. In this part, we introduce the suggested scenarios, classArg requirements, and a usage example for each assessor.

Note: Please follow the format below when you write your `config.yml` file.

<a name="MedianStop"></a>

![#1589F0](https://placehold.it/15/1589F0/000000?text=+) `Median Stop Assessor`

> Builtin Assessor Name: **Medianstop**

**Suggested scenario**

It is applicable to a wide range of performance curves, and thus can be used in various scenarios to speed up the tuning progress.

**Requirement of classArg**

* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the assessor will **stop** trials with smaller expectation. If 'minimize', the assessor will **stop** trials with larger expectation.
* **start_step** (*int, optional, default = 0*) - A trial is only considered for early stopping after it has reported at least start_step intermediate results.

**Usage example:**

```yaml
# config.yml
assessor:
  builtinAssessorName: Medianstop
  classArgs:
    optimize_mode: maximize
    start_step: 5
```

<br>

<a name="Curvefitting"></a>

![#1589F0](https://placehold.it/15/1589F0/000000?text=+) `Curve Fitting Assessor`

> Builtin Assessor Name: **Curvefitting**

**Suggested scenario**

It is applicable to a wide range of performance curves, and thus can be used in various scenarios to speed up the tuning progress. Even better, it is able to handle and assess curves with similar performance.

**Requirement of classArg**

* **epoch_num** (*int, **required***) - The total number of epochs. The assessor needs to know the total number of epochs to determine which point to predict.
* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the assessor will **stop** trials with smaller expectation. If 'minimize', the assessor will **stop** trials with larger expectation.
* **start_step** (*int, optional, default = 6*) - The assessor starts to predict, and to decide whether a trial should be stopped, only after receiving start_step reported intermediate results.
* **threshold** (*float, optional, default = 0.95*) - The threshold used to decide whether to early stop a worse-performing curve. For example, if threshold = 0.95, optimize_mode = maximize, and the best performance in the history is 0.9, then trials whose predicted value is lower than 0.95 * 0.9 = 0.855 will be stopped (see the sketch below).
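
A minimal sketch of this threshold check for optimize_mode = maximize; it only reproduces the arithmetic above and is not NNI's actual implementation:

```python
def curvefitting_should_stop(predicted_final, best_so_far, threshold=0.95):
    """Stop the trial if its predicted final value falls below threshold * best_so_far."""
    return predicted_final < threshold * best_so_far

print(curvefitting_should_stop(0.80, 0.9))  # True: 0.80 < 0.95 * 0.9 = 0.855, stop the trial
print(curvefitting_should_stop(0.86, 0.9))  # False: 0.86 >= 0.855, keep running
```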

**Usage example:**

```yaml
# config.yml
assessor:
  builtinAssessorName: Curvefitting
  classArgs:
    epoch_num: 20
    optimize_mode: maximize
    start_step: 6
    threshold: 0.95
```

[1]: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf
[2]: https://github.com/Microsoft/nni/blob/master/src/sdk/pynni/nni/curvefitting_assessor/README.md
[5]: https://github.com/Microsoft/nni/blob/master/examples/trials/mnist/config_assessor.yml