update document #92

**Write a Trial to Run on NNI**
===
A **Trial** in NNI is an individual attempt at applying a set of parameters to a model.

To define an NNI trial, you first need to define the set of parameters and then update the model. NNI provides two approaches for defining a trial: `NNI API` and `NNI Python annotation`.

## NNI API

>Step 1 - Prepare a SearchSpace parameters file.

An example is shown below:
```
{
"dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
"learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```

Refer to [Search Space Spec](SearchSpaceSpec.md) to learn more about search spaces.
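
To see what a search-space entry means in practice, here is a small illustrative sketch of how a tuner might draw concrete values from `uniform` and `choice` ranges (`choice` is described in the search space spec). This is not NNI's actual tuner code; the `sample` function is made up for illustration:

```python
import random

# Illustration only: a toy sampler for the search-space format shown above.
# (NNI's real tuners are more sophisticated; this is not their implementation.)
search_space = {
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]},
}

def sample(space):
    """Draw one concrete parameter set from a search space."""
    params = {}
    for name, spec in space.items():
        if spec["_type"] == "uniform":
            low, high = spec["_value"]
            params[name] = random.uniform(low, high)   # float in [low, high]
        elif spec["_type"] == "choice":
            params[name] = random.choice(spec["_value"])  # one listed option
    return params

params = sample(search_space)
print(params)
```

Each sampled dict has the same shape as the `RECEIVED_PARAMS` object your trial receives in step 2.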

>Step 2 - Update model code
~~~~
2.1 Declare NNI API
Include `import nni` in your trial code to use NNI APIs.

2.2 Get predefined parameters
Use the following code snippet:

RECEIVED_PARAMS = nni.get_parameters()

to get hyper-parameter values assigned by the tuner. `RECEIVED_PARAMS` is an object, for example:

{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}

2.3 Report NNI results
Use the API:

`nni.report_intermediate_result(accuracy)`

to send `accuracy` to the assessor.

Use the API:

`nni.report_final_result(accuracy)`

to send `accuracy` to the tuner.
~~~~

**NOTE**:
~~~~
accuracy - The `accuracy` could be any Python object, but if you use an NNI built-in tuner/assessor, `accuracy` should be a numerical value (e.g. float, int).
assessor - The assessor decides which trials to stop early based on each trial's performance history (the intermediate results of one trial).
tuner - The tuner generates the next parameters/architecture based on the exploration history (the final results of all trials).
~~~~
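
Putting 2.1-2.3 together, a minimal trial skeleton might look like the sketch below. In a real trial you would `import nni`; here a stand-in stub mimics the three NNI APIs so the sketch runs on its own, and `train_one_epoch` is a hypothetical placeholder for your actual training loop:

```python
# Sketch of the trial skeleton from steps 2.1-2.3 (illustration only).
class _NniStub:
    @staticmethod
    def get_parameters():
        # In a real run, the tuner supplies these values.
        return {"dropout_rate": 0.2, "learning_rate": 0.01}

    @staticmethod
    def report_intermediate_result(metric):
        print("intermediate result:", metric)  # really sent to the assessor

    @staticmethod
    def report_final_result(metric):
        print("final result:", metric)  # really sent to the tuner

nni = _NniStub()  # replace with `import nni` inside an NNI experiment

def train_one_epoch(params, epoch):
    # Hypothetical placeholder for your actual training code.
    return 0.5 + 0.05 * epoch  # fake, steadily improving accuracy

params = nni.get_parameters()            # step 2.2
acc = 0.0
for epoch in range(3):
    acc = train_one_epoch(params, epoch)
    nni.report_intermediate_result(acc)  # step 2.3, per-epoch metric
nni.report_final_result(acc)             # step 2.3, final metric
```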

>Step 3 - Enable NNI API

To enable NNI API mode, you need to set *useAnnotation* to *false* and provide the path of the SearchSpace file (the one you defined in step 1):
```
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```

You can refer to [here](ExperimentConfig.md) for more information about how to set up experiment configurations.

You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.

## NNI Python Annotation
An alternative way to write a trial is to use NNI's annotation syntax for Python. NNI annotations work like comments in your code, so you don't have to restructure or make any other big changes to your existing code. With a few lines of NNI annotation, you will be able to:
* annotate the variables you want to tune
* specify the range in which you want to tune the variables
* annotate which variable you want to report as an intermediate result to the `assessor`
* annotate which variable you want to report as the final result (e.g. model accuracy) to the `tuner`

Again, take MNIST as an example: it only requires two steps to write a trial with NNI annotation.

>Step 1 - Update code with annotations

Please refer to the following TensorFlow code snippet for NNI annotation. The highlighted four lines are annotations that help you to: (1) tune batch\_size and (2) dropout\_rate, (3) report test\_acc every 100 steps, and (4) report test\_acc as the final result.

>What's noteworthy is: as these newly added lines are annotations, they do not change your code's logic or dependencies, and you can still run your code as usual in environments without NNI installed.
```diff | ||
with tf.Session() as sess: | ||
sess.run(tf.global_variables_initializer()) | ||
|
@@ -64,14 +108,16 @@ with tf.Session() as sess: | |
+ """@nni.report_final_result(test_acc)""" | ||
``` | ||
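
Because each annotation is an ordinary string literal beginning with `"""@nni`, an annotated file remains plain, valid Python: without NNI, the strings are simply ignored and the hard-coded defaults are used. A toy standalone sketch (the variable values are hypothetical; the annotation syntax follows the Annotation README):

```python
# Toy sketch: the @nni lines below are ordinary string literals, so running
# this file without NNI installed just uses the hard-coded defaults.
"""@nni.variable(nni.choice(50, 250, 500), name=batch_size)"""
batch_size = 128

"""@nni.variable(nni.uniform(0.1, 0.5), name=dropout_rate)"""
dropout_rate = 0.5

test_acc = 0.9  # stand-in for a metric your model would compute
"""@nni.report_intermediate_result(test_acc)"""
"""@nni.report_final_result(test_acc)"""

print(batch_size, dropout_rate, test_acc)
```

Run under NNI, the same file would instead receive tuned values for `batch_size` and `dropout_rate` and report `test_acc`.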

>NOTE
>>`@nni.variable` takes effect on the line that follows it.
>>
>>`@nni.report_intermediate_result`/`@nni.report_final_result` sends the data to the assessor/tuner at that line.
>>
>>Please refer to [Annotation README](../tools/annotation/README.md) for more information about annotation syntax and its usage.

>Step 2 - Enable NNI Annotation

In the yaml configure file, you need to set *useAnnotation* to *true* to enable NNI annotation:
```
useAnnotation: true
```

For users to leverage NNI annotation correctly, we briefly introduce how it works here: NNI precompiles the trial code to find all annotations, each of which is a single line with `"""@nni` at the head of the line. NNI then replaces each annotation with the corresponding NNI API call at the annotation's location.

**Note that in your trial code, you can use either NNI APIs or NNI annotation, but not both simultaneously.**
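
The detection half of that precompilation step can be pictured as a simple scan for lines whose first non-whitespace characters are `"""@nni`. This is only a rough illustration; NNI's real precompiler also rewrites each annotation into the corresponding API call, which is not reproduced here:

```python
# Rough illustration of spotting annotation lines in a trial file.
source = '''
"""@nni.variable(nni.uniform(0.1, 0.5), name=dropout_rate)"""
dropout_rate = 0.5
"""@nni.report_final_result(test_acc)"""
'''

# Keep every line that is an annotation (a string literal starting with """@nni).
annotations = [
    line.strip()
    for line in source.splitlines()
    if line.strip().startswith('"""@nni')
]
print(annotations)
```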

In the mnist example's experiment configuration (`experimentName: example_mnist`), `maxTrialNum` is raised from 1 to 100; the other settings (`trialConcurrency: 1`, `maxExecDuration: 1h`, `trainingServicePlatform: local`, `searchSpacePath: ~/nni/examples/trials/mnist/search_space.json`) are unchanged.