
[DOC] Improve "Getting Started with TVM" tutorials and fix warnings (apache#8221)

* improve src/README.md

* fix intro

* fix more warnings

* improve docs

* update

* update

* update

* update overview image
merrymercy authored Jun 9, 2021
1 parent 8a04efa commit 53e4c60
Showing 23 changed files with 120 additions and 105 deletions.
2 changes: 1 addition & 1 deletion docs/api/python/graph_executor.rst
@@ -16,6 +16,6 @@
under the License.
tvm.contrib.graph_executor
-------------------------
--------------------------
.. automodule:: tvm.contrib.graph_executor
:members:
8 changes: 4 additions & 4 deletions docs/conf.py
@@ -226,10 +226,10 @@ def git_describe_version(original_version):
"introduction.py",
"install.py",
"tvmc_command_line_driver.py",
"auto_tuning_with_python.py",
"autotvm_relay_x86.py",
"tensor_expr_get_started.py",
"autotvm_matmul.py",
"autoschedule_matmul.py",
"autotvm_matmul_x86.py",
"auto_scheduler_matmul_x86.py",
"cross_compilation_and_rpc.py",
"relay_quick_start.py",
],
@@ -246,7 +246,7 @@ def git_describe_version(original_version):
],
"language": [
"schedule_primitives.py",
"reduciton.py",
"reduction.py",
"intrin_math.py",
"scan.py",
"extern_op.py",
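These lists control the display order of the renamed tutorials. Sphinx-Gallery consumes such a list through a sort-key callable passed as `within_subsection_order`; a minimal sketch of that pattern is below (the dict and class names are illustrative assumptions, not part of this diff).

```python
# Illustrative sketch: order gallery files by their position in an explicit list,
# falling back to plain alphabetical order for files that are not listed.
within_subsection_order = {
    "get_started": [
        "introduction.py",
        "install.py",
        "tvmc_command_line_driver.py",
        "autotvm_relay_x86.py",
        "tensor_expr_get_started.py",
        "autotvm_matmul_x86.py",
        "auto_scheduler_matmul_x86.py",
        "cross_compilation_and_rpc.py",
        "relay_quick_start.py",
    ],
}


class WithinSubsectionOrder:
    def __init__(self, src_dir):
        self.src_dir = src_dir.split("/")[-1]

    def __call__(self, filename):
        order = within_subsection_order.get(self.src_dir, [])
        if filename in order:
            # Zero-padded index sorts listed files ahead of unlisted ones.
            return "\0%010d" % order.index(filename)
        return filename
```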
3 changes: 2 additions & 1 deletion docs/deploy/bnns.rst
@@ -175,7 +175,8 @@ Operator support
| nn.bias_add | Supported by BNNS integration only as a bias part of nn.conv2d or nn.dense |
| | fusion |
+------------------------+------------------------------------------------------------------------------+
| add | Supported by BNNS integration only as a bias part of nn.conv2d or nn.dense fusion |
| add | Supported by BNNS integration only as a bias part of nn.conv2d or nn.dense |
| | fusion |
+------------------------+------------------------------------------------------------------------------+
| nn.relu | Supported by BNNS integration only as a part of nn.conv2d or nn.dense fusion |
+------------------------+------------------------------------------------------------------------------+
2 changes: 1 addition & 1 deletion docs/dev/device_target_interactions.rst
@@ -18,7 +18,7 @@
.. _tvm-target-specific-overview:

Device/Target Interactions
--------------------------
==========================

This document is intended for developers interested in understanding
how the TVM framework interacts with specific device APIs, or who
12 changes: 7 additions & 5 deletions docs/dev/index.rst
@@ -29,10 +29,6 @@ This page is organized as follows:
The sections after are specific guides focused on each logical component, organized
by the component's name.

- The `Device/Target Interactions`_ section describes how TVM
interacts with each supported physical device and code-generation
target.

- Feel free to also check out the :ref:`dev-how-to` for useful development tips.

This guide provides a few complementary views of the architecture.
@@ -245,12 +241,13 @@ for learning-based optimizations.


.. toctree::
:maxdepth: 2
:maxdepth: 1

runtime
debugger
virtual_machine
introduction_to_module_serialization
device_target_interactions

tvm/node
--------
@@ -318,6 +315,11 @@ It also provides a common `Target` class that describes the target.
The compilation pipeline can be customized according to the target by querying the attribute information
in the target and builtin information registered to each target id(cuda, opencl).
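As a rough illustration of that attribute query (a sketch using the public Python API, not part of this change):

```python
import tvm

# Construct a Target and inspect the information the compilation pipeline can query.
target = tvm.target.Target("cuda")
print(target.kind.name)        # the target id, e.g. "cuda"
print(target.max_num_threads)  # an attribute registered for this target kind
print(target.attrs)            # all attributes attached to this target instance
```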

.. toctree::
:maxdepth: 1

device_target_interactions

tvm/tir
-------

4 changes: 2 additions & 2 deletions docs/index.rst
@@ -44,8 +44,8 @@ For Developers
contribute/index
deploy/index
dev/how_to
microtvm/index
errors
faq

.. toctree::
:maxdepth: 1
@@ -76,8 +76,8 @@ For Developers
:hidden:
:caption: MISC

microtvm/index
vta/index
faq


Index
1 change: 1 addition & 0 deletions python/tvm/relay/op/nn/nn.py
@@ -2236,6 +2236,7 @@ def sparse_add(dense_mat, sparse_mat):
Examples
-------
.. code-block:: python
dense_data = [[ 3., 4., 4. ]
[ 4., 2., 5. ]]
sparse_data = [4., 8.]
7 changes: 6 additions & 1 deletion python/tvm/relay/op/transform.py
@@ -1404,6 +1404,7 @@ def sparse_fill_empty_rows(sparse_indices, sparse_values, dense_shape, default_v
Examples
-------
.. code-block:: python
sparse_indices = [[0, 1],
[0, 3],
[2, 0],
@@ -1425,7 +1426,6 @@ def sparse_fill_empty_rows(sparse_indices, sparse_values, dense_shape, default_v
[4, 0]]
empty_row_indicator = [False, True, False, False, True]
new_sparse_values = [1, 2, 10, 3, 4, 10]
"""
new_sparse_indices, new_sparse_values, empty_row_indicator = TupleWrapper(
_make.sparse_fill_empty_rows(sparse_indices, sparse_values, dense_shape, default_value), 3
@@ -1457,6 +1457,7 @@ def sparse_reshape(sparse_indices, prev_shape, new_shape):
Examples
--------
.. code-block:: python
sparse_indices = [[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
@@ -1508,6 +1509,7 @@ def segment_sum(data, segment_ids, num_segments=None):
Examples
--------
.. code-block:: python
data = [[1, 2, 3, 4],
[4, -3, 2, -1],
[5, 6, 7, 8]]
@@ -1578,6 +1580,7 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
Examples
--------
.. code-block:: python
a = [[1,2,3], [4,5,6]]
cumsum(a) # if axis is not provided, cumsum is done over the flattened input.
@@ -1633,6 +1636,7 @@ def cumprod(data, axis=None, dtype=None, exclusive=None):
Examples
--------
.. code-block:: python
a = [[1,2,3], [4,5,6]]
cumprod(a) # if axis is not provided, cumprod is done over the flattened input.
@@ -1693,6 +1697,7 @@ def unique(data, is_sorted=True, return_counts=False):
Examples
--------
.. code-block:: python
[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, False)
output = [4, 5, 1, 2, 3, ?, ?, ?]
indices = [0, 1, 2, 3, 4, 4, 0, 1]
1 change: 1 addition & 0 deletions python/tvm/topi/cuda/sparse_reshape.py
@@ -48,6 +48,7 @@ def sparse_reshape(
Examples
--------
.. code-block:: python
sparse_indices = [[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
1 change: 1 addition & 0 deletions python/tvm/topi/cuda/unique.py
@@ -317,6 +317,7 @@ def unique(data, is_sorted=True, return_counts=False):
Examples
--------
.. code-block:: python
[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, False)
output = [4, 5, 1, 2, 3, ?, ?, ?]
indices = [0, 1, 2, 3, 4, ?, ?, ?]
1 change: 1 addition & 0 deletions python/tvm/topi/sparse_reshape.py
@@ -45,6 +45,7 @@ def sparse_reshape(
Examples
--------
.. code-block:: python
sparse_indices = [[0, 0, 0],
[0, 0, 1],
[0, 1, 0],
1 change: 1 addition & 0 deletions python/tvm/topi/unique.py
@@ -243,6 +243,7 @@ def unique(data, is_sorted=True, return_counts=False):
Examples
--------
.. code-block:: python
[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, False)
output = [4, 5, 1, 2, 3, ?, ?, ?]
indices = [0, 1, 2, 3, 4, ?, ?, ?]
19 changes: 11 additions & 8 deletions src/README.md
@@ -21,14 +21,17 @@ Header files in include are public APIs that share across modules.
There can be internal header files within each module that sit in src.

## Modules
- support: Internal support utilities.
- runtime: Minimum runtime related codes.
- node: base infra for IR/AST nodes that is dialect independent.
- ir: Common IR infrastructure.
- tir: Tensor-level IR.
- te: tensor expression DSL
- arith: Arithmetic expression and set simplification.
- relay: Relay IR, high-level optimization.
- autotvm: The auto-tuning module.
- auto\_scheduler: The template-free auto-tuning module.
- autotvm: The template-based auto-tuning module.
- contrib: Contrib extension libraries.
- driver: Compilation driver APIs.
- ir: Common IR infrastructure.
- node: The base infra for IR/AST nodes that is dialect independent.
- relay: Relay IR, high-level optimizations.
- runtime: Minimum runtime related codes.
- support: Internal support utilities.
- target: Hardware target.
- tir: Tensor IR, low-level optimizations.
- te: Tensor expression DSL.
- topi: Tensor Operator Inventory.
2 changes: 1 addition & 1 deletion tutorials/auto_scheduler/tune_network_arm.py
@@ -437,7 +437,7 @@ def tune_and_evaluate():
# in function :code:`run_tuning`. Say,
# :code:`tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)`
# 4. If you have multiple target CPUs, you can use all of them for measurements to
# parallelize the measurements. Check this :ref:`section <tutorials-autotvm-rpc-tracker>`
# parallelize the measurements. Check this :ref:`section <tutorials-autotvm-scale-up-rpc-tracker>`
# to learn how to use the RPC Tracker and RPC Server.
# To use the RPC Tracker in auto-scheduler, replace the runner in :code:`TuningOptions`
# with :any:`auto_scheduler.RPCRunner`.
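A rough sketch of that replacement, with placeholder tracker host, port, device key, and log file:

```python
from tvm import auto_scheduler

# Placeholder values: adjust the device key, tracker host/port, and log file
# to match your own RPC tracker setup.
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=200,
    runner=auto_scheduler.RPCRunner(
        key="rasp4b-64",        # device key used when registering the boards
        host="127.0.0.1",       # RPC tracker host
        port=9190,              # RPC tracker port
        timeout=30,
        repeat=1,
        min_repeat_ms=200,
        enable_cpu_cache_flush=True,
    ),
    measure_callbacks=[auto_scheduler.RecordToFile("network_tuning.json")],
)
```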
2 changes: 1 addition & 1 deletion tutorials/autotvm/tune_conv2d_cuda.py
@@ -77,7 +77,7 @@
# to tune other operators such as depthwise convolution and gemm.
# In order to fully understand this template, you should be familiar with
# the schedule primitives and auto tuning API. You can refer to the above
# tutorials and :doc:`autotvm tutorial <tune_simple_template>`
# tutorials and :ref:`autotvm tutorial <tutorial-autotvm-matmul-x86>`
#
# It is worth noting that the search space for a conv2d operator
# can be very large (at the level of 10^9 for some input shapes)
tutorials/get_started/autoschedule_matmul.py → tutorials/get_started/auto_scheduler_matmul_x86.py
@@ -23,7 +23,7 @@
In this tutorial, we will show how TVM's Auto Scheduling feature can find
optimal schedules without the need for writing a custom template.
Different from the template-based :ref:`<autotvm_matmul>` which relies on
Different from the template-based :doc:`AutoTVM <autotvm_matmul_x86>` which relies on
manual templates to define the search space, the auto-scheduler does not
require any templates. Users only need to write the computation declaration
without any schedule commands or templates. The auto-scheduler can
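For context, the computation-only declaration that the auto-scheduler searches over has roughly this shape (a sketch following the tutorial's matmul example; names and sizes are illustrative):

```python
import tvm
from tvm import te, auto_scheduler


# Illustrative sketch: only the computation is declared, no schedule primitives.
@auto_scheduler.register_workload
def matmul_add(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    C = te.placeholder((N, M), name="C", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    matmul = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="matmul")
    out = te.compute((N, M), lambda i, j: matmul[i, j] + C[i, j], name="out")
    return [A, B, C, out]


# The auto-scheduler searches for a schedule on its own and replays the best record.
task = auto_scheduler.SearchTask(
    func=matmul_add, args=(1024, 1024, 1024, "float32"), target=tvm.target.Target("llvm")
)
log_file = "matmul.json"
task.tune(
    auto_scheduler.TuningOptions(
        num_measure_trials=10,
        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
    )
)
sch, args = task.apply_best(log_file)
```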
tutorials/get_started/autotvm_matmul.py → tutorials/get_started/autotvm_matmul_x86.py
@@ -15,17 +15,18 @@
# specific language governing permissions and limitations
# under the License.
"""
Optimizing Operators with Templates and AutoTVM
===============================================
.. _tutorial-autotvm-matmul-x86:
Optimizing Operators with Schedule Templates and AutoTVM
========================================================
**Authors**:
`Lianmin Zheng <https://github.com/merrymercy>`_,
`Chris Hoge <https://github.com/hogepodge>`_
In this tutorial, we will now show how the TVM Template Extension (TE) language
can be used to write scheduling templates that can be searched by AutoTVM to
find optimal configurations of scheduling variables. This process is called
Auto-Tuning, and builds on TE to help automate the process of optimizing
operations.
In this tutorial, we show how the TVM Tensor Expression (TE) language
can be used to write schedule templates that can be searched by AutoTVM to
find the optimal schedule. This process is called Auto-Tuning, which helps
automate the process of optimizing tensor computation.
This tutorial builds on the previous `tutorial on how to write a matrix
multiplication using TE <tensor_expr_get_started>`.
@@ -371,6 +372,6 @@ def matmul(N, L, M, dtype):
# To gain a deeper understanding of how this works, we recommend expanding on
# this example by adding new search parameters to the schedule based on
# schedule operations demonstrated in the `Getting Started With Tensor
# Expressions <tensor_expr_get_started>_` tutorial In the upcoming sections, we
# Expressions <tensor_expr_get_started>_` tutorial. In the upcoming sections, we
# will demonstrate the AutoScheduler, a method for TVM to optimize common
# operators without the need for the user to provide a user-defined template.
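For reference, the kind of schedule template AutoTVM searches over looks roughly like the sketch below (close to the tutorial's matmul example and meant as an illustration, not the exact tutorial code):

```python
import tvm
from tvm import te, autotvm


# Illustrative sketch of an AutoTVM schedule template for matmul.
@autotvm.template("tutorial/matmul")
def matmul(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)
    y, x = s[C].op.axis

    # Tunable knobs: how to split the two spatial loops.
    cfg = autotvm.get_config()
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)

    # Apply the knobs chosen by the tuner to produce a concrete schedule.
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    s[C].reorder(yo, xo, k, yi, xi)
    return s, [A, B, C]
```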
tutorials/get_started/auto_tuning_with_python.py → tutorials/get_started/autotvm_relay_x86.py
@@ -15,8 +15,8 @@
# specific language governing permissions and limitations
# under the License.
"""
Compiling and Optimizing a Model with the Python AutoScheduler
==============================================================
Compiling and Optimizing a Model with the Python Interface (AutoTVM)
====================================================================
**Author**:
`Chris Hoge <https://github.com/hogepodge>`_
@@ -302,6 +302,7 @@
repeat=repeat,
timeout=timeout,
min_repeat_ms=min_repeat_ms,
enable_cpu_cache_flush=True,
)

# Create a simple structure for holding tuning options. We use an XGBoost
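The runner above plugs into a measurement option and a per-task tuner loop roughly like the following sketch (standard `tvm.autotvm` API; the record file name and trial counts are placeholders):

```python
from tvm import autotvm
from tvm.autotvm.tuner import XGBTuner

# Placeholder settings; the record file name and trial counts are illustrative.
runner = autotvm.LocalRunner(
    number=10,
    repeat=1,
    timeout=10,
    min_repeat_ms=0,
    enable_cpu_cache_flush=True,  # flush the CPU cache between measurements for stable timings
)
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(build_func="default"), runner=runner
)


def tune_tasks(tasks, records="autotvm_records.json", trials=10, early_stopping=100):
    """Tune each task extracted with autotvm.task.extract_from_program and log the results."""
    for i, task in enumerate(tasks):
        tuner = XGBTuner(task, loss_type="rank")
        tuner.tune(
            n_trial=min(trials, len(task.config_space)),
            early_stopping=early_stopping,
            measure_option=measure_option,
            callbacks=[
                autotvm.callback.progress_bar(trials, prefix="[Task %2d]" % (i + 1)),
                autotvm.callback.log_to_file(records),
            ],
        )
```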
4 changes: 2 additions & 2 deletions tutorials/get_started/install.py
@@ -23,8 +23,8 @@
Depending on your needs and your working environment, there are a few different
methods for installing TVM. These include:
* Installing from source
* Installing from third-party binary package.
* Installing from source
* Installing from third-party binary package.
"""

################################################################################