[DOC] Document update (#329)
tqchen authored Aug 15, 2017
1 parent 07e56b9 commit ce18b56
Showing 6 changed files with 65 additions and 25 deletions.
20 changes: 11 additions & 9 deletions README.md
@@ -4,23 +4,25 @@
[Installation](docs/how_to/install.md) |
[Documentation](http://docs.tvmlang.org) |
[Tutorials](http://tutorials.tvmlang.org) |
[Operator Inventory](topi) |
[FAQ](docs/faq.md) |
[Contributors](CONTRIBUTORS.md) |
[Release Notes](NEWS.md)

TVM
===
TVM: Tensor IR Stack for Deep Learning Systems
==============================================
TVM is a Tensor intermediate representation (IR) stack for deep learning systems. It is designed to close the gap between
productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends.
TVM works with deep learning frameworks to provide end-to-end compilation to different backends.

TVM is a low-level domain-specific language (DSL) for compiling tensor computation pipelines.
It is designed to compile multi-dimensional tensor algebra pipelines which
are crucial to deep learning frameworks.
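
As a hedged illustration of the kind of pipeline this DSL describes, the sketch below declares a small element-wise computation, creates a default schedule, and builds it for a CPU target. It assumes the Python API of this era (`tvm.placeholder`, `tvm.compute`, `tvm.create_schedule`, `tvm.build`) and is not taken from the repository itself.

```python
import numpy as np
import tvm

# Declare the computation: C[i] = A[i] + B[i] over a symbolic length n.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.placeholder((n,), name="B")
C = tvm.compute(A.shape, lambda i: A[i] + B[i], name="C")

# Create a default schedule and lower it to an LLVM (CPU) kernel.
s = tvm.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Run the generated kernel on NumPy-backed arrays.
ctx = tvm.cpu(0)
a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
b = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
c = tvm.nd.array(np.zeros(1024, dtype=C.dtype), ctx)
fadd(a, b, c)
np.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
```
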
License
-------
© Contributors, 2017. Licensed under an [Apache-2.0](https://github.com/dmlc/tvm/blob/master/LICENSE) license.

Contribute to TVM
-----------------
Your help is very valuable in making the package better for everyone.
TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community.

- [Contributor Guide](docs/how_to/contribute.md)
- Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md)
- Please also update [NEWS.md](NEWS.md) with changes and improvements to the API and code.

## Documentation
The current documentation can be built locally via Sphinx. See the [docs](docs) folder for details.
28 changes: 26 additions & 2 deletions docs/api/python/topi.rst
@@ -13,16 +13,28 @@ Index
topi.tanh
topi.log
topi.sqrt
topi.sigmoid
topi.broadcast_to
topi.max
topi.sum
topi.min
topi.nn.relu
topi.nn.dilate
topi.nn.scale_shift
topi.nn.conv2d_nchw
topi.nn.conv2d_hwcn
topi.nn.depthwise_conv2d


**List of schedules**

.. autosummary::

topi.cuda.schedule_depthwise_conv2d_map
topi.cuda.schedule_conv2d_nchw
topi.cuda.schedule_conv2d_hwcn
topi.cuda.schedule_depthwise_conv2d
topi.cuda.schedule_reduce
topi.cuda.schedule_broadcast_to


topi
@@ -31,15 +43,27 @@ topi
.. autofunction:: topi.tanh
.. autofunction:: topi.log
.. autofunction:: topi.sqrt
.. autofunction:: topi.sigmoid
.. autofunction:: topi.broadcast_to
.. autofunction:: topi.max
.. autofunction:: topi.sum
.. autofunction:: topi.min

topi.nn
~~~~~~~
.. autofunction:: topi.nn.relu
.. autofunction:: topi.nn.dilate
.. autofunction:: topi.nn.scale_shift
.. autofunction:: topi.nn.conv2d_nchw
.. autofunction:: topi.nn.conv2d_hwcn
.. autofunction:: topi.nn.depthwise_conv2d

topi.cuda
~~~~~~~~~
.. automodule:: topi.cuda

.. autofunction:: topi.cuda.schedule_depthwise_conv2d_map
.. autofunction:: topi.cuda.schedule_conv2d_nchw
.. autofunction:: topi.cuda.schedule_conv2d_hwcn
.. autofunction:: topi.cuda.schedule_depthwise_conv2d
.. autofunction:: topi.cuda.schedule_reduce
.. autofunction:: topi.cuda.schedule_broadcast_to
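
A hedged sketch of how the declarations and schedules listed above fit together: a TOPI operator provides the compute declaration, and a schedule turns it into a kernel. The exact call signature (e.g. `topi.nn.relu(A)`) is an assumption based on the function name rather than the documented API.

```python
import tvm
import topi

# Declare the input and apply a TOPI operator (ReLU) to it.
n = tvm.var("n")
A = tvm.placeholder((n, 1024), name="A")
B = topi.nn.relu(A)  # assumed signature: relu(x) -> Tensor

# For a CPU build, a default TVM schedule over the TOPI compute is enough;
# the topi.cuda.schedule_* functions listed above play the same role for GPU targets.
s = tvm.create_schedule(B.op)
f = tvm.build(s, [A, B], target="llvm", name="relu")
```
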
26 changes: 18 additions & 8 deletions docs/faq.md
@@ -6,17 +6,27 @@ How to Install
--------------
See [Installation](https://github.com/dmlc/tvm/blob/master/docs/how_to/install.md)

TVM's relation to XLA
---------------------
They have different abstraction levels.
XLA is a higher-level tensor algebra DSL; the system defines codegen and loop transformation
rules for each kernel. TVM is a low-level, array-index-based DSL that gives the loop transformation
primitives to the user. In terms of design philosophy, TVM aims to be directly used by developers
and to provide general support for different frameworks via DLPack.
See also [This Issue](https://github.com/dmlc/tvm/issues/151)
TVM's relation to Other IR/DSL Projects
---------------------------------------
There are usually two levels of IR abstraction in deep learning systems.
NNVM, TensorFlow's XLA, and Intel's ngraph use a computation graph representation.
This representation is high level and can be helpful for performing generic optimizations
such as memory reuse, layout transformation, and automatic differentiation.

TVM adopts a low-level representation that explicitly expresses the choice of memory
layout, parallelization pattern, locality, hardware primitives, etc.
This level of IR is closer to directly targeting hardware.
The low-level IR adopts ideas from existing image processing languages like Halide and darkroom,
and from loop transformation tools like loopy and polyhedra-based analysis.
We specifically focus on expressing deep learning workloads (e.g. recurrence),
optimization for different hardware backends, and embedding with frameworks to provide an
end-to-end compilation stack.
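
As a hedged illustration of what "explicitly expressing the parallelization pattern and locality" looks like in practice, the sketch below splits a loop axis and binds the pieces to the GPU thread hierarchy. The primitives shown (`split`, `bind`, `tvm.thread_axis`) are the schedule API of this era, applied to a made-up element-wise op.

```python
import tvm

n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute(A.shape, lambda i: A[i] * 2.0, name="B")

s = tvm.create_schedule(B.op)
# Explicit loop transformation: split the single axis into chunks of 64 iterations...
bx, tx = s[B].split(B.op.axis[0], factor=64)
# ...and state the parallelization pattern by binding the pieces to the GPU hierarchy.
s[B].bind(bx, tvm.thread_axis("blockIdx.x"))
s[B].bind(tx, tvm.thread_axis("threadIdx.x"))

# The lowered IR makes the chosen loop structure and parallelism explicit.
print(tvm.lower(s, [A, B], simple_mode=True))
```
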


TVM's relation to libDNN and cuDNN
----------------------------------
TVM can incorporate these libraries as external calls. One goal of TVM is to be able to
generate high-performance kernels. We will evolve TVM in an incremental manner as
we learn from the techniques of manual kernel crafting and add these as primitives in the DSL.
See also [TVM Operator Inventory](https://github.com/dmlc/tvm/tree/master/topi) for
recipes of operators in TVM.
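
A hedged sketch of the "external call" mechanism mentioned above: `tvm.extern` lets a stage be implemented by a packed function instead of generated code. The registered name `tvm.contrib.cblas.matmul` is used for illustration and assumes the cblas contrib module is available; treat the exact argument list as an assumption.

```python
import tvm

n = tvm.var("n")
l = tvm.var("l")
m = tvm.var("m")
A = tvm.placeholder((n, l), name="A")
B = tvm.placeholder((l, m), name="B")

# Implement C = A x B by calling into an external BLAS routine rather than
# generating the kernel; TVM only schedules around the opaque extern stage.
C = tvm.extern(
    (n, m), [A, B],
    lambda ins, outs: tvm.call_packed(
        "tvm.contrib.cblas.matmul", ins[0], ins[1], outs[0], False, False),
    name="C")

s = tvm.create_schedule(C.op)
f = tvm.build(s, [A, B, C], target="llvm")
```
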
2 changes: 1 addition & 1 deletion docs/how_to/install.md
@@ -8,7 +8,7 @@ scratch on various systems. It consists of two steps:

To get started, clone the tvm repo from GitHub. It is important to clone the submodules along with it, using the ```--recursive``` option.
```bash
git clone --recursive ssh://git@github.com/dmlc/tvm
git clone --recursive https://github.com/dmlc/tvm
```
For Windows users who use GitHub tools, you can open the Git shell and type the following command.
```bash
7 changes: 4 additions & 3 deletions topi/README.md
@@ -1,7 +1,7 @@
# TVM Operator Inventory
# TOPI: TVM Operator Inventory

topi is the operator collection library for TVM, intended to share the effort of crafting and
optimizing TVM-generated kernels. The goals:
TOPI is the operator collection library for TVM, intended to share the effort of crafting
and optimizing TVM-generated kernels. The goals:

- Provide sugars for operator declaration
- Give common primitives for fused op creation.
@@ -21,6 +21,7 @@ optimizing tvm generated kernels. The goal:
- Some kernels have requirements on shape and data layout; assert them.
- Be data-layout aware; if the layout is not specified in an argument or in the function, assume NCHW by default (see the sketch below).
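
A hedged sketch of the shape/layout guideline above: a hypothetical TOPI-style declaration that checks the NCHW assumption explicitly before describing the compute. The operator `scale_nchw` and its body are made up for illustration.

```python
import tvm

def scale_nchw(data, scale):
    """Hypothetical TOPI-style op: multiply an NCHW tensor by a per-channel scale."""
    # Guideline: assert the shape/layout requirements instead of silently assuming them.
    assert len(data.shape) == 4, "scale_nchw only supports 4-D NCHW input"
    assert len(scale.shape) == 1, "scale must be a 1-D per-channel tensor"
    n, c, h, w = data.shape
    return tvm.compute(
        (n, c, h, w),
        lambda i, j, k, l: data[i, j, k, l] * scale[j],
        name="scale_nchw")
```
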


## Testcase
- Add test cases to test out the schedule and dataflow in the TOPI workflow.
- Only do correctness testing without attaching compiler flags, and only run it once.
7 changes: 5 additions & 2 deletions topi/python/topi/__init__.py
@@ -1,8 +1,11 @@
# pylint: disable=redefined-builtin, wildcard-import
"""TVM Operator Inventory.
TOPI is the operator collection library for TVM, intended to share the effort of crafting and
optimizing TVM-generated kernels.
TOPI is the operator collection library for TVM; it provides sugars
for constructing compute declarations as well as optimized schedules.
Some of the schedule functions may have been specially optimized for a
specific workload.
"""
from __future__ import absolute_import as _abs

