merge master #298

Merged 8 commits on May 26, 2021
2 changes: 2 additions & 0 deletions README.md
@@ -348,6 +348,8 @@ Join IM discussion groups:
| OpenPAI | [![Build Status](https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20openpai%20-%20linux?branchName=master)](https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=65&branchName=master) |
| Frameworkcontroller | [![Build Status](https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20frameworkcontroller?branchName=master)](https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=70&branchName=master) |
| Kubeflow | [![Build Status](https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20kubeflow?branchName=master)](https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=69&branchName=master) |
| Hybrid | [![Build Status](https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20hybrid?branchName=master)](https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=79&branchName=master) |
| AzureML | [![Build Status](https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20aml?branchName=master)](https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=78&branchName=master) |

## Related Projects

2 changes: 2 additions & 0 deletions dependencies/recommended.txt
@@ -10,3 +10,5 @@ pytorch-lightning >= 1.1.1
onnx
peewee
graphviz
gym
tianshou >= 0.4.1
2 changes: 2 additions & 0 deletions dependencies/recommended_legacy.txt
@@ -11,3 +11,5 @@ keras == 2.1.6
onnx
peewee
graphviz
gym
tianshou >= 0.4.1
29 changes: 16 additions & 13 deletions docs/en_US/Compression/CompressionReference.rst
@@ -34,7 +34,7 @@ Weight Masker
.. autoclass:: nni.algorithms.compression.pytorch.pruning.weight_masker.WeightMasker
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.structured_pruning.StructuredWeightMasker
.. autoclass:: nni.algorithms.compression.pytorch.pruning.structured_pruning_masker.StructuredWeightMasker
:members:


@@ -43,40 +43,40 @@ Pruners
.. autoclass:: nni.algorithms.compression.pytorch.pruning.sensitivity_pruner.SensitivityPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.OneshotPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot_pruner.OneshotPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.LevelPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot_pruner.LevelPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.SlimPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot_pruner.L1FilterPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.L1FilterPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot_pruner.L2FilterPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.L2FilterPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot_pruner.FPGMPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.FPGMPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.iterative_pruner.IterativePruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.TaylorFOWeightFilterPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.iterative_pruner.SlimPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.ActivationAPoZRankFilterPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.iterative_pruner.TaylorFOWeightFilterPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.one_shot.ActivationMeanRankFilterPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.iterative_pruner.ActivationAPoZRankFilterPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.lottery_ticket.LotteryTicketPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.iterative_pruner.ActivationMeanRankFilterPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.agp.AGPPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.iterative_pruner.AGPPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.admm_pruner.ADMMPruner
.. autoclass:: nni.algorithms.compression.pytorch.pruning.iterative_pruner.ADMMPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.auto_compress_pruner.AutoCompressPruner
@@ -88,6 +88,9 @@ Pruners
.. autoclass:: nni.algorithms.compression.pytorch.pruning.simulated_annealing_pruner.SimulatedAnnealingPruner
:members:

.. autoclass:: nni.algorithms.compression.pytorch.pruning.lottery_ticket.LotteryTicketPruner
:members:
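
These paths reflect this PR's module renames (``one_shot`` → ``one_shot_pruner``, ``structured_pruning`` → ``structured_pruning_masker``, plus the new ``iterative_pruner``). As a sketch, assuming the package continues to re-export the classes at the top level as in NNI 2.x, user-facing imports are unaffected:

.. code-block:: python

   # package-level imports resolve to the renamed modules listed above
   from nni.algorithms.compression.pytorch.pruning import LevelPruner, AGPPruner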


Quantizers
^^^^^^^^^^
4 changes: 2 additions & 2 deletions docs/en_US/Compression/CustomizeCompressor.rst
@@ -28,7 +28,7 @@ An implementation of ``weight masker`` may look like this:
# mask = ...
return {'weight_mask': mask}

You can reference the NNI-provided :githublink:`weight masker <nni/algorithms/compression/pytorch/pruning/structured_pruning.py>` implementations to implement your own weight masker.
You can reference the NNI-provided :githublink:`weight masker <nni/algorithms/compression/pytorch/pruning/structured_pruning_masker.py>` implementations to implement your own weight masker.
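
As a concrete illustration, here is a minimal magnitude-based masker in the spirit of the built-in level masker. This is a sketch, not the NNI implementation, and ``MyMasker`` is a placeholder name:

.. code-block:: python

   import torch
   from nni.algorithms.compression.pytorch.pruning.weight_masker import WeightMasker

   class MyMasker(WeightMasker):
       def calc_mask(self, sparsity, wrapper, wrapper_idx=None):
           weight = wrapper.module.weight.data
           num_prune = int(weight.numel() * sparsity)
           if num_prune == 0:
               return {'weight_mask': torch.ones_like(weight)}
           # prune the ``num_prune`` weights with the smallest magnitude
           threshold = weight.abs().view(-1).topk(num_prune, largest=False)[0].max()
           mask = torch.gt(weight.abs(), threshold).type_as(weight)
           return {'weight_mask': mask}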

A basic ``pruner`` looks like this:

@@ -52,7 +52,7 @@ A basic ``pruner`` looks like this:
wrapper.if_calculated = True
return masks

Reference the NNI-provided :githublink:`pruner <nni/algorithms/compression/pytorch/pruning/one_shot.py>` implementations to implement your own pruner class.
Reference the NNI-provided :githublink:`pruner <nni/algorithms/compression/pytorch/pruning/one_shot_pruner.py>` implementations to implement your own pruner class.
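
The snippet above is truncated by the collapsed diff; a pruner of this shape looks roughly like the following sketch (``MyMasker`` is the placeholder masker from the previous example, not an NNI class):

.. code-block:: python

   from nni.compression.pytorch.compressor import Pruner

   class MyPruner(Pruner):
       def __init__(self, model, config_list):
           super().__init__(model, config_list)
           self.set_wrappers_attribute("if_calculated", False)
           self.masker = MyMasker(model, self)

       def calc_mask(self, wrapper, wrapper_idx=None):
           if wrapper.if_calculated:
               return None  # one-shot: compute each layer's mask only once
           masks = self.masker.calc_mask(sparsity=wrapper.config['sparsity'],
                                         wrapper=wrapper, wrapper_idx=wrapper_idx)
           wrapper.if_calculated = True
           return masks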

----

15 changes: 12 additions & 3 deletions docs/en_US/Compression/Overview.rst
@@ -14,10 +14,19 @@ NNI provides a model compression toolkit to help users compress and speed up their models
* Provide friendly and easy-to-use compression utilities for users to dive into the compression process and results.
* Concise interface for users to customize their own compression algorithms.


Compression Pipeline
--------------------

.. image:: ../../img/compression_flow.jpg
:target: ../../img/compression_flow.jpg
:alt:

The overall compression pipeline in NNI. To compress a pretrained model, pruning and quantization can be used alone or in combination.

.. note::
   NNI's compression algorithms only simulate compression (for example, pruners apply masks rather than removing weights), while the NNI speedup tool truly compresses the model and reduces latency. To obtain a truly compact model, users should conduct `model speedup <./ModelSpeedup.rst>`__. The interface and APIs are unified for both PyTorch and TensorFlow; currently only the PyTorch version is supported, and the TensorFlow version will be supported in the future.


Supported Algorithms
--------------------

@@ -26,7 +35,7 @@ The algorithms include pruning algorithms and quantization algorithms.
Pruning Algorithms
^^^^^^^^^^^^^^^^^^

Pruning algorithms compress the original network by removing redundant weights or channels of layers, which can reduce model complexity and address the over-fitting issue.
Pruning algorithms compress the original network by removing redundant weights or channels of layers, which can reduce model complexity and mitigate the over-fitting issue.

.. list-table::
:header-rows: 1
@@ -96,6 +105,7 @@ Model Speedup

The final goal of model compression is to reduce inference latency and model size. However, existing model compression algorithms mainly use simulation to check the performance (e.g., accuracy) of the compressed model: for example, pruning algorithms use masks, and quantization algorithms still store quantized values in float32. Given the output masks and quantization bits produced by those algorithms, NNI can truly speed up the model. The detailed tutorial of Masked Model Speedup can be found `here <./ModelSpeedup.rst>`__. The detailed tutorial of Mixed Precision Quantization Model Speedup can be found `here <./QuantizationSpeedup.rst>`__.
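
For instance, a prune-then-speedup flow might look like the following sketch (``model`` and ``dummy_input`` are assumed to be defined, and the file names are placeholders):

.. code-block:: python

   from nni.algorithms.compression.pytorch.pruning import L1FilterPruner
   from nni.compression.pytorch import ModelSpeedup

   pruner = L1FilterPruner(model, [{'sparsity': 0.5, 'op_types': ['Conv2d']}])
   pruner.compress()
   # export the simulated result: masked weights plus the mask file
   pruner.export_model(model_path='pruned.pth', mask_path='mask.pth')
   pruner._unwrap_model()  # remove the pruner wrappers before speedup
   # replace masked computations with truly smaller modules
   ModelSpeedup(model, dummy_input, 'mask.pth').speedup_model()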


Compression Utilities
---------------------

Expand All @@ -110,7 +120,6 @@ NNI model compression leaves simple interface for users to customize a new compr
Reference and Feedback
----------------------


* To `report a bug <https://github.com/microsoft/nni/issues/new?template=bug-report.rst>`__ for this feature in GitHub;
* To `file a feature or improvement request <https://github.com/microsoft/nni/issues/new?template=enhancement.rst>`__ for this feature in GitHub;
* To know more about `Feature Engineering with NNI <../FeatureEngineering/Overview.rst>`__\ ;
21 changes: 6 additions & 15 deletions docs/en_US/Compression/Pruner.rst
@@ -1,15 +1,11 @@
Supported Pruning Algorithms on NNI
===================================

We provide several pruning algorithms that support fine-grained weight pruning and structural filter pruning. **Fine-grained Pruning** generally results in unstructured models, which need specialized hardware or software to speed up the sparse network. **Filter Pruning** achieves acceleration by removing the entire filter. Some pruning algorithms use one-shot method that prune weights at once based on an importance metric. Other pruning algorithms control the **pruning schedule** that prune weights during optimization, including some automatic pruning algorithms.
We provide several pruning algorithms that support fine-grained weight pruning and structural filter pruning. **Fine-grained Pruning** generally results in unstructured models, which need specialized hardware or software to speed up the sparse network. **Filter Pruning** achieves acceleration by removing entire filters. Some pruning algorithms use a one-shot method that prunes weights at once based on an importance metric (the model then needs to be fine-tuned to compensate for the loss of accuracy); a minimal one-shot example appears after the category lists below. Other pruning algorithms prune weights **iteratively** during optimization, controlling the pruning schedule; these include some automatic pruning algorithms.


**Fine-grained Pruning**

* `Level Pruner <#level-pruner>`__

**Filter Pruning**

**One-shot Pruning**
* `Level Pruner <#level-pruner>`__ (fine-grained pruning)
* `Slim Pruner <#slim-pruner>`__
* `FPGM Pruner <#fpgm-pruner>`__
* `L1Filter Pruner <#l1filter-pruner>`__
@@ -18,18 +14,17 @@ We provide several pruning algorithms that support fine-grained weight pruning and structural filter pruning.
* `Activation Mean Rank Filter Pruner <#activationmeanrankfilter-pruner>`__
* `Taylor FO On Weight Pruner <#taylorfoweightfilter-pruner>`__

**Pruning Schedule**
**Iterative Pruning**

* `AGP Pruner <#agp-pruner>`__
* `NetAdapt Pruner <#netadapt-pruner>`__
* `SimulatedAnnealing Pruner <#simulatedannealing-pruner>`__
* `AutoCompress Pruner <#autocompress-pruner>`__
* `AMC Pruner <#amc-pruner>`__
* `Sensitivity Pruner <#sensitivity-pruner>`__
* `ADMM Pruner <#admm-pruner>`__

**Others**

* `ADMM Pruner <#admm-pruner>`__
* `Lottery Ticket Hypothesis <#lottery-ticket-hypothesis>`__
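
As referenced above, a minimal one-shot example (``model`` is assumed to be your PyTorch module):

.. code-block:: python

   from nni.algorithms.compression.pytorch.pruning import LevelPruner

   config_list = [{'sparsity': 0.8, 'op_types': ['default']}]
   pruner = LevelPruner(model, config_list)
   pruner.compress()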

Level Pruner
@@ -382,11 +377,7 @@ PyTorch code

from nni.algorithms.compression.pytorch.pruning import AGPPruner
config_list = [{
'initial_sparsity': 0,
'final_sparsity': 0.8,
'start_epoch': 0,
'end_epoch': 10,
'frequency': 1,
'sparsity': 0.8,
'op_types': ['default']
}]
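
A minimal usage sketch with the new single-``sparsity`` config shown above (``trainer`` and ``criterion`` stand in for your own training loop and loss function):

.. code-block:: python

   # iteratively increases sparsity toward the 0.8 target
   pruner = AGPPruner(model, config_list, optimizer, trainer, criterion,
                      num_iterations=10, epochs_per_iteration=1)
   pruner.compress()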

12 changes: 11 additions & 1 deletion docs/en_US/NAS/retiarii/Advanced.rst
@@ -1,7 +1,17 @@
Advanced Tutorial
=================

This document includes two parts. The first part explains the design decision of ``@basic_unit`` and ``serializer``. The second part is the tutorial of how to write a model space with mutators.
Pure-python execution engine (experimental)
-------------------------------------------

If you are experiencing issues with TorchScript, or with the model code generated by Retiarii, there is another execution engine called the pure-Python execution engine, which doesn't need the code-graph conversion. It generally does not affect models and strategies, but customized mutation might not be supported.

This will become the default execution engine in a future version of Retiarii.

For now, two steps are needed to enable this engine; a minimal sketch follows the list.

1. Add the ``@nni.retiarii.model_wrapper`` decorator to the whole PyTorch model.
2. Add ``config.execution_engine = 'py'`` to ``RetiariiExeConfig``.
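
A minimal sketch of the two steps (the ``Net`` module body and the experiment setup are illustrative assumptions, not part of this PR):

.. code-block:: python

   import torch.nn.functional as F
   import nni.retiarii.nn.pytorch as nn
   from nni.retiarii import model_wrapper
   from nni.retiarii.experiment.pytorch import RetiariiExeConfig

   @model_wrapper  # step 1: wrap the whole PyTorch model
   class Net(nn.Module):
       def __init__(self):
           super().__init__()
           self.conv = nn.Conv2d(1, 8, kernel_size=3)
           self.fc = nn.Linear(8 * 26 * 26, 10)

       def forward(self, x):
           x = F.relu(self.conv(x))
           return self.fc(x.flatten(1))

   config = RetiariiExeConfig('local')
   config.execution_engine = 'py'  # step 2: select the pure-Python engine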

``@basic_unit`` and ``serializer``
----------------------------------
6 changes: 6 additions & 0 deletions docs/en_US/NAS/retiarii/ApiReference.rst
@@ -18,6 +18,12 @@ Inline Mutation APIs
.. autoclass:: nni.retiarii.nn.pytorch.ChosenInputs
:members:

.. autoclass:: nni.retiarii.nn.pytorch.Repeat
:members:

.. autoclass:: nni.retiarii.nn.pytorch.Cell
:members:

Graph Mutation APIs
-------------------
