[Docs] Prevented docs/1 file from being generated. #8029

Merged · 6 commits · Jun 24, 2021
1 change: 1 addition & 0 deletions docs/api/python/index.rst
@@ -18,6 +18,7 @@
Python API
==========


.. toctree::
:maxdepth: 2

1 change: 1 addition & 0 deletions docs/api/python/relay/image.rst
@@ -22,4 +22,5 @@ tvm.relay.image
.. automodule:: tvm.relay.image
:members:
:imported-members:
:exclude-members: Expr, Constant
:autosummary:
1 change: 1 addition & 0 deletions docs/api/python/relay/index.rst
@@ -26,4 +26,5 @@ tvm.relay
TypeVar, GlobalTypeVar, TypeConstraint, FuncType, TupleType, IncompleteType,
TypeCall, TypeRelation, TensorType, RelayRefType, GlobalVar, SourceName,
Span, Var, Op, Constructor
:noindex: TypeData
:autosummary:
1 change: 1 addition & 0 deletions docs/api/python/tir.rst
@@ -37,6 +37,7 @@ tvm.tir.analysis
.. automodule:: tvm.tir.analysis
:members:
:imported-members:
:noindex: Buffer, Stmt
:autosummary:


1 change: 1 addition & 0 deletions docs/api/python/topi.rst
@@ -20,6 +20,7 @@ tvm.topi
.. automodule:: tvm.topi
:members:
:imported-members:
:noindex: AssertStmt
:autosummary:

tvm.topi.nn
1 change: 1 addition & 0 deletions docs/dev/device_target_interactions.rst
@@ -15,6 +15,7 @@
specific language governing permissions and limitations
under the License.


.. _tvm-target-specific-overview:

Device/Target Interactions
11 changes: 11 additions & 0 deletions docs/dev/index.rst
@@ -29,6 +29,10 @@ This page is organized as follows:
The sections after are specific guides focused on each logical component, organized
by the component's name.

- The :ref:`Device/Target Interactions <tvm-target-specific-overview>`
page describes how TVM interacts with each supported physical device
and code-generation target.

- Feel free to also check out the :ref:`dev-how-to` for useful development tips.

This guide provides a few complementary views of the architecture.
@@ -244,11 +248,18 @@ for learning-based optimizations.
:maxdepth: 1

runtime


.. toctree::
:maxdepth: 1

debugger
virtual_machine
introduction_to_module_serialization
device_target_interactions



tvm/node
--------
The node module adds additional features on top of the `runtime::Object` for IR data structures.
2 changes: 1 addition & 1 deletion python/tvm/auto_scheduler/compute_dag.py
@@ -96,7 +96,7 @@ class ComputeDAG(Object):

Parameters
----------
compute : Union[List[Tensor], str, Schedule]
compute : Union[List[Tensor], str, tvm.te.Schedule]
Input/output tensors or workload key for a compute declaration.
"""

12 changes: 6 additions & 6 deletions python/tvm/driver/build_module.py
@@ -98,17 +98,17 @@ def lower(

Parameters
----------
inputs : Union[schedule.Schedule, PrimFunc, IRModule]
inp : Union[tvm.te.schedule.Schedule, tvm.tir.PrimFunc, IRModule]
The TE schedule or TensorIR PrimFunc/IRModule to be built

args : Optional[List[Union[Buffer, tensor.Tensor, Var]]]
args : Optional[List[Union[tvm.tir.Buffer, tensor.Tensor, Var]]]
The argument lists to the function for TE schedule.
It should be None if we want to lower TensorIR.

name : str
The name of result function.

binds : Optional[Mapping[tensor.Tensor, Buffer]]
binds : Optional[Mapping[tensor.Tensor, tvm.tir.Buffer]]
Dictionary that maps the Tensor to Buffer which specifies the data layout
requirement of the function. By default, a new compact buffer is created
for each tensor in the argument.
@@ -233,10 +233,10 @@ def build(

Parameters
----------
inputs : Union[schedule.Schedule, PrimFunc, IRModule, Mapping[str, IRModule]]
inputs : Union[tvm.te.schedule.Schedule, tvm.tir.PrimFunc, IRModule, Mapping[str, IRModule]]
The input to be built

args : Optional[List[Union[Buffer, tensor.Tensor, Var]]]
args : Optional[List[Union[tvm.tir.Buffer, tensor.Tensor, Var]]]
The argument lists to the function.

target : Optional[Union[str, Target]]
@@ -254,7 +254,7 @@
name : Optional[str]
The name of result function.

binds : Optional[Mapping[tensor.Tensor, Buffer]]
binds : Optional[Mapping[tensor.Tensor, tvm.tir.Buffer]]
Dictionary that maps the binding of symbolic buffer to Tensor.
By default, a new buffer is created for each tensor in the argument.

18 changes: 12 additions & 6 deletions python/tvm/ir/op.py
@@ -96,14 +96,20 @@ def add_type_rel(self, rel_name, type_rel_func=None):
type_rel_func : Optional[function (args: List[Type], attrs: Attrs) -> Type]
The backing relation function which can solve an arbitrary relation on variables.
Differences with type_rel_func in C++:
1, when type_rel_func is not None:
1) OpAddTypeRel on C++ side will adjust type_rel_func with TypeReporter to

1) When type_rel_func is not None

a) OpAddTypeRel on C++ side will adjust type_rel_func with TypeReporter to
calling convention of relay type system.
2) type_rel_func returns output argument's type, return None means can't

b) type_rel_func returns output argument's type, return None means can't
infer output's type.
3) only support single output operators for now, the last argument is output tensor.
2, when type_rel_func is None, will call predefined type_rel_funcs in relay
accorrding to `tvm.relay.type_relation.` + rel_name.

c) only support single output operators for now, the last argument is output tensor.

2) when type_rel_func is None, will call predefined type_rel_funcs in relay
according to ``tvm.relay.type_relation.`` + rel_name.

"""
_ffi_api.OpAddTypeRel(self, rel_name, type_rel_func)

11 changes: 6 additions & 5 deletions python/tvm/micro/build.py
@@ -158,11 +158,12 @@ def default_options(crt_config_include_dir, standalone_crt_dir=None):
Dict :
A dictionary containing 3 subkeys, each whose value is _build_default_compiler_options()
plus additional customization.
- "bin_opts" - passed as "options" to Compiler.binary() when building MicroBinary.
- "lib_opts" - passed as "options" to Compiler.library() when building bundled CRT
libraries (or otherwise, non-generated libraries).
- "generated_lib_opts" - passed as "options" to Compiler.library() when building the
generated library.

- "bin_opts" - passed as "options" to Compiler.binary() when building MicroBinary.
- "lib_opts" - passed as "options" to Compiler.library() when building bundled CRT
libraries (or otherwise, non-generated libraries).
- "generated_lib_opts" - passed as "options" to Compiler.library() when building the
generated library.
"""
bin_opts = _build_default_compiler_options(standalone_crt_dir)
bin_opts["include_dirs"].append(crt_config_include_dir)
16 changes: 12 additions & 4 deletions python/tvm/relay/op/transform.py
@@ -1373,25 +1373,32 @@ def sparse_fill_empty_rows(sparse_indices, sparse_values, dense_shape, default_v
Fill rows in a sparse matrix that do not contain any values. Values are placed in the first
column of empty rows. The sparse array is in COO format.
It returns a TupleWrapper with 3 outputs

Parameters
----------
sparse_indices : relay.Expr
A 2-D tensor[N, ndims] of integers containing location of sparse values, where N is
the number of sparse values and n_dim is the number of dimensions of the dense_shape.
The first column of this relay parameter must be sorted in ascending order.

sparse_values : relay.Expr
A 1-D tensor[N] containing the sparse values for the sparse indices.

dense_shape : relay.Expr
A 1-D tensor[ndims] which contains shape of the dense output tensor.

default_value : relay.Expr
A 1-D tensor[1] containing the default value for the remaining locations.

Returns
-------
new_sparse_indices : relay.Expr
A 2-D tensor[?, ndims] of integers containing location of new sparse
indices. The first column outputs must be sorted in ascending order.

new_sparse_values : relay.Expr
A 1-D tensor[?] containing the sparse values for the sparse indices.

empty_row_indicator : relay.Expr
A 1-D tensor[dense_shape[0]] filled with zeros and ones
indicating whether the particular row is empty or full respectively
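The documented semantics can be sketched in plain Python. This is a hypothetical reference implementation, not TVM's relay op: it operates on Python lists rather than `relay.Expr`, and assumes `sparse_indices` is sorted by its first column, as the docstring requires.

```python
def sparse_fill_empty_rows(sparse_indices, sparse_values, dense_shape, default_value):
    """Plain-Python sketch of the documented COO fill behavior."""
    num_rows, ndims = dense_shape[0], len(dense_shape)
    occupied = {idx[0] for idx in sparse_indices}
    # 1 marks an empty row, 0 a row that already has entries
    empty_row_indicator = [0 if r in occupied else 1 for r in range(num_rows)]
    new_indices, new_values = [], []
    i = 0
    for r in range(num_rows):
        if r in occupied:
            # copy existing entries for this row (input is sorted by row)
            while i < len(sparse_indices) and sparse_indices[i][0] == r:
                new_indices.append(sparse_indices[i])
                new_values.append(sparse_values[i])
                i += 1
        else:
            # place default_value in the first column of the empty row
            new_indices.append([r] + [0] * (ndims - 1))
            new_values.append(default_value)
    return new_indices, new_values, empty_row_indicator
```

For example, `sparse_fill_empty_rows([[0, 1], [2, 0]], [7, 8], [4, 3], 9)` fills rows 1 and 3 with the default value while keeping the row-sorted output order.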
@@ -1702,18 +1709,18 @@ def unique(data, is_sorted=True, return_counts=False):
.. code-block:: python

[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, False)
output = [4, 5, 1, 2, 3, ?, ?, ?]
output = [4, 5, 1, 2, 3, _, _, _]
indices = [0, 1, 2, 3, 4, 4, 0, 1]
num_unique = [5]

[output, indices, num_unique, counts] = unique([4, 5, 1, 2, 3, 3, 4, 5], False, True)
output = [4, 5, 1, 2, 3, ?, ?, ?]
output = [4, 5, 1, 2, 3, _, _, _]
indices = [0, 1, 2, 3, 4, 4, 0, 1]
num_unique = [5]
counts = [2, 2, 1, 1, 2, ?, ?, ?]
counts = [2, 2, 1, 1, 2, _, _, _]

[output, indices, num_unique] = unique([4, 5, 1, 2, 3, 3, 4, 5], True)
output = [1, 2, 3, 4, 5, ?, ?, ?]
output = [1, 2, 3, 4, 5, _, _, _]
indices = [3, 4, 0, 1, 2, 2, 3, 4]
num_unique = [5]
"""
@@ -1744,6 +1751,7 @@ def invert_permutation(data):
Examples
--------
.. code-block:: python

data = [3, 4, 0, 2, 1]
relay.invert_permutation(data) = [2, 4, 3, 0, 1]
"""
26 changes: 14 additions & 12 deletions python/tvm/relay/transform/transform.py
@@ -1177,18 +1177,20 @@ def FakeQuantizationToInteger():
"""
Find regions of the graph of the form

x w
| |
dq dq
\ /
op1
|
op2
|
q

where q == qnn.quantize and dq = qnn.dequantize
and rewrite them into integer versions of op1 and op2
.. code-block:: text

x w
| |
dq dq
\ /
op1
|
op2
|
q

where ``q == qnn.quantize`` and ``dq = qnn.dequantize``
and rewrite them into integer versions of ``op1`` and ``op2``

Rules for rewriting individual ops are in fake_quantization_to_integer.py

1 change: 1 addition & 0 deletions python/tvm/runtime/ndarray.py
@@ -390,6 +390,7 @@ def gpu(dev_id=0):

deprecated:: 0.9.0
Use :py:func:`tvm.cuda` instead.

Parameters
----------
dev_id : int, optional
2 changes: 1 addition & 1 deletion python/tvm/runtime/profiling.py
@@ -26,7 +26,7 @@
class Report(Object):
"""A container for information gathered during a profiling run.

Fields
Attributes
----------
calls : Array[Dict[str, Object]]
Per-call profiling metrics (function name, runtime, device, ...).
2 changes: 1 addition & 1 deletion python/tvm/te/hybrid/__init__.py
@@ -74,7 +74,7 @@ def build(sch, inputs, outputs, name="hybrid_func"):

Parameters
----------
sch: Schedule
sch: tvm.te.Schedule
The schedule to be dumped

inputs: An array of Tensors or Vars
10 changes: 5 additions & 5 deletions python/tvm/te/operation.py
@@ -226,12 +226,12 @@ def extern(
.. note::
**Parameters**

- **ins** (list of :any:`Buffer`) - Placeholder for each inputs
- **outs** (list of :any:`Buffer`) - Placeholder for each outputs
- **ins** (list of :any:`tvm.tir.Buffer`) - Placeholder for each input
- **outs** (list of :any:`tvm.tir.Buffer`) - Placeholder for each output

**Returns**

- **stmt** (:any:`Stmt`) - The statement that carries out array computation.
- **stmt** (:any:`tvm.tir.Stmt`) - The statement that carries out array computation.

name: str, optional
The name hint of the tensor
@@ -240,10 +240,10 @@
The data types of outputs,
by default dtype will be same as inputs.

in_buffers: Buffer or list of Buffer, optional
in_buffers: tvm.tir.Buffer or list of tvm.tir.Buffer, optional
Input buffers.

out_buffers: Buffer or list of Buffers, optional
out_buffers: tvm.tir.Buffer or list of tvm.tir.Buffer, optional
Output buffers.


8 changes: 4 additions & 4 deletions python/tvm/te/tensor_intrin.py
@@ -82,20 +82,20 @@ def decl_tensor_intrin(
.. note::
**Parameters**

- **ins** (list of :any:`Buffer`) - Placeholder for each inputs
- **outs** (list of :any:`Buffer`) - Placeholder for each outputs
- **ins** (list of :any:`tvm.tir.Buffer`) - Placeholder for each input
- **outs** (list of :any:`tvm.tir.Buffer`) - Placeholder for each output

**Returns**

- **stmt** (:any:`Stmt`, or tuple of three stmts)
- **stmt** (:any:`tvm.tir.Stmt`, or tuple of three stmts)
- If a single stmt is returned, it represents the body
- If tuple of three stmts are returned they corresponds to body,
reduce_init, reduce_update

name: str, optional
The name of the intrinsic.

binds: dict of :any:`Tensor` to :any:`Buffer`, optional
binds: dict of :any:`Tensor` to :any:`tvm.tir.Buffer`, optional
Dictionary that maps the Tensor to Buffer which specifies the data layout
requirement of the function. By default, a new compact buffer is created
for each tensor in the argument.
2 changes: 1 addition & 1 deletion python/tvm/tir/buffer.py
@@ -198,7 +198,7 @@ def decl_buffer(

Returns
-------
buffer : Buffer
buffer : tvm.tir.Buffer
The created buffer

Example
17 changes: 10 additions & 7 deletions python/tvm/tir/schedule/block_scope.py
@@ -109,15 +109,18 @@ class Dependency(Object):

@register_object("tir.BlockScope")
class BlockScope(Object):
"""An object corresponds to each block sref in the sref tree,
which tracks the producer-consumer dependency between blocks.
"""An object corresponds to each block sref in the sref tree, which
tracks the producer-consumer dependency between blocks.

Glossary:
- Block scope: A contiguous subtree of the sref tree, rooted at each block sref,
whose components are:
- scope root: a block sref
- internal srefs: loop srefs
- scope leaves: block srefs

- Block scope: A contiguous subtree of the sref tree, rooted at
each block sref, whose components are:

- scope root: a block sref
- internal srefs: loop srefs
- scope leaves: block srefs

- Child block: The scope leaf blocks under the scope root or a specific internal sref
"""
