Fix direct and broken links (#9314)
Updates links to use references instead of direct links, fixing
broken links and making all internal docs links more durable to
refactoring
Chris Hoge authored Oct 19, 2021
1 parent e7a0c5c commit 0147b04
Showing 12 changed files with 39 additions and 39 deletions.
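The pattern applied throughout the diff below is the standard Sphinx one: declare an explicit label immediately above a target section, then point at it with the ``:ref:`` role instead of a hard-coded URL, so the link survives file moves and URL changes. A generic sketch of the convention (the label and section names here are made up for illustration, not taken from the commit):

```rst
.. _my-doc-label:

My Target Section
-----------------

Content of the section that other pages want to link to.

Elsewhere in the docs, link to it by label rather than by URL:

See :ref:`the target section <my-doc-label>` for details.
```

Because the reference resolves by label at build time, Sphinx reports a warning if the target disappears, whereas a direct URL would silently go stale.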
18 changes: 8 additions & 10 deletions docs/dev/how_to/relay_add_op.rst
Original file line number Diff line number Diff line change
@@ -190,18 +190,16 @@ useful for fusing operators. ``kOpaque`` tells TVM to not bother trying to fuse
While we've now defined the interface for our operations we still need to define
how to perform the actual calculations for cumulative sum and product.

Writing this code is outside the scope of the tutorial. For now, we assume
we have a well tested implementation for the operation's compute. For
more details on how to do this, we recommend looking up the tutorials
on `tensor expressions`_, `TVM's operator inventory (topi)`_ and looking at the
example cumulative sum and product implementations found in `python/tvm/topi/scan.py`_
and the gpu versions in `python/tvm/topi/cuda/scan.py`_. In the case of our cumulative
sum and product operations we write things directly in `TIR`_ which is the
Writing this code is outside the scope of the tutorial. For now, we assume we
have a well tested implementation for the operation's compute. For more details
on how to do this, we recommend looking up the tutorials on :ref:`tensor
expressions <tutorial-tensor-expr-get-started>`, :ref:`TVM's operator inventory
(topi) <tutorial-topi>` and looking at the example cumulative sum and product
implementations found in `python/tvm/topi/scan.py`_ and the gpu versions in
`python/tvm/topi/cuda/scan.py`_. In the case of our cumulative sum and product
operations we write things directly in :ref:`TIR <api-python-tir>` which is the
representation where tensor expressions and topi will lower into.

.. _tensor expressions: https://tvm.apache.org/docs/tutorials/get_started/tensor_expr_get_started.html
.. _TVM's operator inventory (topi): https://tvm.apache.org/docs/tutorials/topi/intro_topi.html
.. _TIR: https://tvm.apache.org/docs/dev/index.html?highlight=tir#tvm-tir
.. _python/tvm/topi/scan.py: https://github.com/apache/tvm/blob/main/python/tvm/topi/scan.py
.. _python/tvm/topi/cuda/scan.py: https://github.com/apache/tvm/blob/main/python/tvm/topi/cuda/scan.py

6 changes: 3 additions & 3 deletions docs/how_to/deploy/arm_compute_lib.rst
@@ -142,9 +142,9 @@ Export the module.
lib.export_library(lib_path, cc=cross_compile)
Run Inference. This must be on an Arm device. If compiling on x86 device and running on AArch64,
consider using the RPC mechanism. Tutorials for using the RPC mechanism:
https://tvm.apache.org/docs/tutorials/get_started/cross_compilation_and_rpc.html
Run Inference. This must be on an Arm device. If compiling on x86 device and
running on AArch64, consider using the RPC mechanism. :ref:`Tutorials for using
the RPC mechanism <tutorial-cross-compilation-and-rpc>`

.. code:: python
2 changes: 2 additions & 0 deletions docs/reference/api/python/tir.rst
@@ -15,6 +15,8 @@
specific language governing permissions and limitations
under the License.
.. _api-python-tir:

tvm.tir
-------
.. automodule:: tvm.tir
10 changes: 4 additions & 6 deletions docs/topic/vta/install.rst
@@ -30,8 +30,8 @@ We present three installation guides, each extending on the previous one:
VTA Simulator Installation
--------------------------

You need `TVM installed <https://tvm.apache.org/docs/install/index.html>`_ on your machine.
For a quick and easy start, checkout the `Docker Guide <https://tvm.apache.org/docs/install/docker.html>`_.
You need :ref:`TVM installed <installation>` on your machine. For a quick and
easy start, checkout the :ref:`Docker Guide <docker-images>`.

You'll need to set the following paths to use VTA:

@@ -65,7 +65,7 @@ To ensure that you've properly installed the VTA python package, run the following
python <tvm root>/vta/tests/python/integration/test_benchmark_topi_conv2d.py
You are invited to try out our `VTA programming tutorials <https://tvm.apache.org/docs/vta/tutorials/index.html>`_.
You are invited to try out our :ref:`VTA programming tutorials <vta-tutorials>`.

**Note**: You'll notice that for every convolution layer, the throughput gets reported in GOPS. These numbers are actually the computational throughput that the simulator achieves, by evaluating the convolutions in software.

@@ -222,9 +222,7 @@ The performance metrics measured on the Pynq board will be reported for each convolution layer

**Tip**: You can track progress of the FPGA programming and the runtime rebuilding steps by looking at the RPC server's logging messages in your Pynq ``ssh`` session.

You can also try out our `VTA programming tutorials <https://tvm.apache.org/docs/vta/tutorials/index.html>`_.


You can also try out our :ref:`VTA programming tutorials <vta-tutorials>`.

Intel DE10 FPGA Setup
---------------------
10 changes: 5 additions & 5 deletions gallery/how_to/deploy_models/deploy_prequantized_tflite.py
@@ -255,8 +255,8 @@ def run_tvm(lib):
# * Set the environment variable TVM_NUM_THREADS to the number of physical cores
# * Choose the best target for your hardware, such as "llvm -mcpu=skylake-avx512" or
# "llvm -mcpu=cascadelake" (more CPUs with AVX512 would come in the future)
# * Perform autotuning - `Auto-tuning a convolution network for x86 CPU
# <https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_x86.html>`_.
# * To get best inference performance on ARM CPU, change target argument according to your
# device and follow `Auto-tuning a convolution network for ARM CPU
# <https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_arm.html>`_.
# * Perform autotuning - :ref:`Auto-tuning a convolution network for x86 CPU
# <tune_relay_x86>`.
# * To get best inference performance on ARM CPU, change target argument
# according to your device and follow :ref:`Auto-tuning a convolution
# network for ARM CPU <tune_relay_arm>`.
2 changes: 2 additions & 0 deletions gallery/how_to/work_with_schedules/schedule_primitives.py
@@ -15,6 +15,8 @@
# specific language governing permissions and limitations
# under the License.
"""
.. _schedule_primitives:
Schedule Primitives in TVM
==========================
**Author**: `Ziheng Jiang <https://github.com/ZihengJiang>`_
12 changes: 5 additions & 7 deletions gallery/tutorial/autotvm_relay_x86.py
@@ -81,10 +81,9 @@
#
# .. note:: Working with Other Model Formats
#
# TVM supports many popular model formats. A list can be found in the `Compile
# Deep Learning Models
# <https://tvm.apache.org/docs/tutorials/index.html#compile-deep-learning-models>`_
# section of the TVM Documentation.
# TVM supports many popular model formats. A list can be found in the
# :ref:`Compile Deep Learning Models <tutorial-frontend>` section of the TVM
# Documentation.

model_url = "".join(
[
@@ -150,9 +150,8 @@
#
# Specifying the correct target can have a huge impact on the performance of
# the compiled module, as it can take advantage of hardware features
# available on the target. For more information, please refer to `Auto-tuning
# a convolutional network for x86 CPU
# <https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_x86.html#define-network>`_.
# available on the target. For more information, please refer to
# :ref:`Auto-tuning a convolutional network for x86 CPU <tune_relay_x86>`.
# We recommend identifying which CPU you are running, along with optional
# features, and set the target appropriately. For example, for some
# processors ``target = "llvm -mcpu=skylake"``, or ``target = "llvm
4 changes: 2 additions & 2 deletions gallery/tutorial/install.py
@@ -35,8 +35,8 @@
# allow you to enable specific features such as GPU support, microcontroller
# support (microTVM), and a debugging runtime, and other features. You will also
# want to install from source if you want to actively contribute to the TVM
# project. The full instructions are on the `Install TVM From Source
# <https://tvm.apache.org/docs/install/from_source.html>`_ page.
# project. The full instructions are on the :ref:`Install TVM From Source
# <install-from-source>` page.

################################################################################
# Installing From Binary Packages
2 changes: 2 additions & 0 deletions gallery/tutorial/intro_topi.py
@@ -15,6 +15,8 @@
# specific language governing permissions and limitations
# under the License.
"""
.. _tutorial-topi:
Introduction to TOPI
====================
**Author**: `Ehsan M. Kermani <https://github.com/ehsanmok>`_
2 changes: 1 addition & 1 deletion gallery/tutorial/tensor_expr_get_started.py
@@ -512,7 +512,7 @@ def evaluate_addition(func, target, optimization, log):
# before it moves on to the next stage.
#
# A complete description of these primitives can be found in the
# [Schedule Primitives](https://tvm.apache.org/docs/tutorials/language/schedule_primitives.html) docs page.
# :ref:`Schedule Primitives <schedule_primitives>` docs page.

################################################################################
# Example 2: Manually Optimizing Matrix Multiplication with TE
8 changes: 3 additions & 5 deletions gallery/tutorial/tvmc_command_line_driver.py
@@ -154,11 +154,9 @@
# Specifying the correct target (option ``--target``) can have a huge
# impact on the performance of the compiled module, as it can take
# advantage of hardware features available on the target. For more
# information, please refer to `Auto-tuning a convolutional network
# for x86 CPU <https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_x86.html#define-network>`_.
# We recommend identifying which CPU you are running, along with optional features,
# and set the target appropriately.
#
# information, please refer to :ref:`Auto-tuning a convolutional network for
# x86 CPU <tune_relay_x86>`. We recommend identifying which CPU you are
# running, along with optional features, and set the target appropriately.

################################################################################
# Running the Model from The Compiled Module with TVMC
2 changes: 2 additions & 0 deletions vta/tutorials/README.txt
@@ -1,3 +1,5 @@
.. _vta-tutorials:

VTA Tutorials
=============
This page contains tutorials about VTA and how to use TVM/Relay to target VTA.
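The mechanical change repeated across these files, replacing a direct RST hyperlink with a ``:ref:`` cross-reference, can be sketched as a small script. This helper is not part of the commit; the function name and the URL-to-label mapping below are illustrative assumptions, with one real pair taken from the diff above:

```python
import re

# Illustrative mapping from a docs URL to its Sphinx reference label.
# Only URLs listed here are rewritten; anything else is left untouched.
URL_TO_LABEL = {
    "https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_x86.html": "tune_relay_x86",
}

def link_to_ref(text: str) -> str:
    """Rewrite `Link Text <URL>`_ into :ref:`Link Text <label>` for known URLs."""
    # Matches an RST external link: backtick, link text, <URL>, backtick, underscore.
    pattern = re.compile(r"`([^`<]+?)\s*<([^>]+)>`_")

    def repl(match: re.Match) -> str:
        label = URL_TO_LABEL.get(match.group(2))
        if label is None:
            return match.group(0)  # unknown URL: leave the link as-is
        return f":ref:`{match.group(1)} <{label}>`"

    return pattern.sub(repl, text)
```

A rewrite like this only covers the link side; each target document still needs an explicit label (for example ``.. _tune_relay_x86:``) added above its title, as several hunks in this commit do.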
