[DOCS] Fix sphinx precheck (apache#4967)
* [DOCS] Fix sphinx precheck

* ignore keras warnings

* Remove more warnings
tqchen authored and Trevor Morris committed Apr 16, 2020
1 parent 5c7a716 commit 1bbae08
Showing 4 changed files with 64 additions and 63 deletions.
docs/langref/relay_adt.rst: 41 changes (21 additions & 20 deletions)
@@ -1,3 +1,4 @@

.. Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
@@ -63,7 +64,7 @@ Hence, it is often easy to reason about ADTs.
Below is a simple example of defining an ADT and using it in a function
via a match expression:

-.. code-block:: python
+.. code-block::
# Defines an ADT named "Numbers"
data Numbers {
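
The full Relay definition of :code:`Numbers` is collapsed in this diff, but the general shape of an ADT plus a match expression can be sketched in plain Python (3.10+ for :code:`match`/:code:`case`). The constructor names below are hypothetical stand-ins, not the ones from the Relay docs:

.. code-block:: python

   # Hypothetical sketch: an ADT with two constructors and a match over it.
   # This is ordinary Python, not Relay syntax.
   from dataclasses import dataclass

   @dataclass
   class Scalar:            # constructor carrying a single field
       value: int

   @dataclass
   class Pair:              # constructor carrying two fields
       first: int
       second: int

   Numbers = Scalar | Pair  # the ADT is the union of its constructors

   def total(n: Numbers) -> int:
       # Deconstruct the value by constructor, like a Relay match expression.
       match n:
           case Scalar(v):
               return v
           case Pair(a, b):
               return a + b

   assert total(Scalar(3)) == 3 and total(Pair(1, 2)) == 3
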
@@ -94,7 +95,7 @@ meaning that two ADTs with structurally identical constructors
will nevertheless be distinct data types from the point of view of
the typechecker.

-.. code-block:: python
+.. code-block::
# structurally identical constructors to Numbers
data Numbers2 {
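
Python classes are likewise compared by name rather than by structure, so the same point can be made with a tiny hypothetical sketch (names invented for illustration):

.. code-block:: python

   # Two classes with identical structure are still distinct types.
   from dataclasses import dataclass

   @dataclass
   class Meters:
       value: float

   @dataclass
   class Feet:
       value: float

   assert not isinstance(Meters(1.0), Feet)  # same shape, different type
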
@@ -117,7 +118,7 @@ can be polymorphic and take type parameters.
For example, one of the standard ADTs commonly used in functional
programming languages is the optional type, defined here:

-.. code-block:: python
+.. code-block::
# a is a type parameter
data Optional<a> {
@@ -141,7 +142,7 @@ imply, an ADT instance is thus given a type that contains the
concrete type arguments for that instance, ensuring the information is
kept around. Let the below example illustrate:

-.. code-block:: python
+.. code-block::
# the signature for option indicates the type argument
def @inc_scalar(%opt : Optional[Tensor[(), int32]]) -> Tensor[(), int32] {
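
Relay's :code:`Optional` and :code:`@inc_scalar` are only partially visible here. A hypothetical Python sketch of the same pattern, an option type plus a function that matches on it, might look like the following (Python 3.10+, invented names):

.. code-block:: python

   # Hypothetical sketch of an option ADT and a function like @inc_scalar.
   from dataclasses import dataclass
   from typing import Generic, TypeVar

   T = TypeVar("T")

   @dataclass
   class Some(Generic[T]):   # carries a payload of type T
       value: T

   @dataclass
   class Nothing:            # carries no payload
       pass

   def inc_scalar(opt: Some[int] | Nothing) -> int:
       match opt:
           case Some(v):
               return v + 1   # the payload is present and usable here
           case Nothing():
               return 0       # fall back to a default when it is absent

   assert inc_scalar(Some(41)) == 42
   assert inc_scalar(Nothing()) == 0
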
@@ -198,7 +199,7 @@ Many commonly used ADTs involve recursion; some of these are given
in `Common ADT Uses`_. As an example here, we will
examine the list ADT, ubiquitous in functional languages:

-.. code-block:: python
+.. code-block::
data List<a> {
Nil : () -> List
@@ -216,7 +217,7 @@ end of the list is reached, which can be indicated with a :code:`Nil`
Lists represented in this manner can easily be recursively processed.
For example, the following function sums a list of integers:

-.. code-block:: python
+.. code-block::
def @list_sum(%l : List[Tensor[(), int32]]) -> Tensor[(), int32] {
match(%l) {
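
A rough Python rendering of these two pieces, the cons-list ADT and the recursive sum, could look like this (hypothetical names, Python 3.10+):

.. code-block:: python

   # Hypothetical cons-list ADT and a recursive sum, mirroring @list_sum.
   from dataclasses import dataclass
   from typing import Generic, TypeVar

   A = TypeVar("A")

   @dataclass
   class Nil:                 # the empty list
       pass

   @dataclass
   class Cons(Generic[A]):    # one element plus the rest of the list
       head: A
       tail: "Cons[A] | Nil"

   def list_sum(l: Cons[int] | Nil) -> int:
       match l:
           case Nil():
               return 0
           case Cons(h, t):
               return h + list_sum(t)   # recurse until Nil is reached

   assert list_sum(Cons(1, Cons(2, Cons(3, Nil())))) == 6
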
@@ -250,7 +251,7 @@ and the second has a :code:`Cons` constructor pattern that uses variable pattern

The below example uses a wildcard pattern to ignore one of the arguments to :code:`Cons`:

-.. code-block:: python
+.. code-block::
def @first<a>(%l : List[a]) -> Optional[a] {
match(%l) {
@@ -262,7 +263,7 @@ The below example uses a wildcard pattern to ignore one of the arguments to :cod
Here, a constructor pattern is nested inside another constructor pattern to avoid nested match expressions for a list option.
A top-level wildcard pattern is also used to handle all cases that do not match the first clause:

-.. code-block:: python
+.. code-block::
def @second_opt<a>(%ll : Optional[List[a]]) -> Optional[a] {
match(%ll) {
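
Python's structural patterns support both ideas, a wildcard that ignores part of a value and one pattern nested inside another. A loose analogue of :code:`@first` and :code:`@second_opt`, using built-in lists instead of the Relay ADTs, is sketched below:

.. code-block:: python

   # Loose analogue with Python lists: `_` ignores a sub-value, and patterns
   # can be nested so a single match handles the combined structure.
   def first(l: list):
       match l:
           case [h, *_]:      # wildcard ignores everything after the head
               return h
           case []:
               return None

   def second(l: list):
       match l:
           case [_, second_elem, *_]:   # nested: skip the head, bind the next
               return second_elem
           case _:                      # top-level wildcard for all other cases
               return None

   assert first([1, 2, 3]) == 1
   assert second([1, 2, 3]) == 2
   assert second([1]) is None
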
@@ -281,7 +282,7 @@ Note that a match expression checks its patterns in the order the cases are list
that matches the input value is the one that is evaluated. Here, a top-level variable pattern binds the whole
input value:

-.. code-block:: python
+.. code-block::
def @match_order_beware<a>(%l : List[a]) -> List[a] {
match(%l) {
Expand All @@ -291,7 +292,7 @@ input value:
case Nil() { Nil() }
}
}
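
The same ordering pitfall exists for Python's :code:`match`, which also tries clauses from top to bottom. Python rejects a bare capture pattern that is not the last clause, so the hypothetical sketch below uses a sequence pattern that is just as broad:

.. code-block:: python

   # Clauses are tried in order, so a broad first pattern shadows later ones.
   def match_order_beware(l: list) -> list:
       match l:
           case [*whole]:   # matches any list, including the empty one
               return whole
           case []:         # never reached: the clause above already matched
               return []

   assert match_order_beware([]) == []   # handled by the first clause
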
Common ADT Uses
===============

@@ -312,7 +313,7 @@ list comprehensions and certain library functions in Python. Below are very comm
through lists, which are included in Relay's Prelude. (These have all been extensively characterized
in the functional programming literature, and we do not attempt to reproduce that work in this document.)

-.. code-block:: python
+.. code-block::
# Map: for [h1, h2, ..., hn] returns [f(h1), f(h2), ..., f(hn)]
def @map<a, b>(%f : fn(a) -> b, %l : List[a]) -> List[b] {
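
The bodies of these Prelude iterators are collapsed above. As a hypothetical sketch of the same recursion schemes over plain Python lists (head/tail recursion instead of Relay's :code:`Cons`/:code:`Nil` constructors):

.. code-block:: python

   # Hypothetical head/tail-recursive sketches of map, foldl and foldr.
   from typing import Callable, TypeVar

   A = TypeVar("A")
   B = TypeVar("B")

   def map_(f: Callable[[A], B], l: list[A]) -> list[B]:
       # [h1, h2, ..., hn] -> [f(h1), f(h2), ..., f(hn)]
       if not l:
           return []
       return [f(l[0])] + map_(f, l[1:])

   def foldl(f: Callable[[B, A], B], acc: B, l: list[A]) -> B:
       # combines from the left: f(f(f(acc, h1), h2), ...)
       if not l:
           return acc
       return foldl(f, f(acc, l[0]), l[1:])

   def foldr(f: Callable[[A, B], B], end: B, l: list[A]) -> B:
       # combines from the right: f(h1, f(h2, ... f(hn, end)))
       if not l:
           return end
       return f(l[0], foldr(f, end, l[1:]))

   assert map_(lambda x: x * 2, [1, 2, 3]) == [2, 4, 6]
   assert foldl(lambda a, x: a + x, 0, [1, 2, 3]) == 6
   assert foldr(lambda x, a: [x] + a, [], [1, 2, 3]) == [1, 2, 3]
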
@@ -341,7 +342,7 @@ in the functional programming literature, and we do not attempt to reproduce tha
Using these iteration constructs, many common operations over lists can be expressed compactly.
For example, the following map doubles all members of a list:

-.. code-block:: python
+.. code-block::
# directly written
def @double(%l : List[Tensor[(), int32]]) -> List[Tensor[(), int32]] {
@@ -356,7 +357,7 @@ For example, the following map doubles all members of a list:
The following right fold concatenates two lists:

-.. code-block:: python
+.. code-block::
# directly written
def @concat<a>(%l1 : List[a], %l2 : List[a]) -> List[a] {
@@ -371,7 +372,7 @@ The following right fold concatenates two lists:
The following left fold flattens a list of lists (using concatenation):

-.. code-block:: python
+.. code-block::
# directly written
def @flatten<a>(%ll : List[List[a]]) -> List[a] {
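
The collapsed code in this and the two previous hunks covers :code:`@double`, :code:`@concat` and :code:`@flatten`. A rough Python counterpart of these three operations, using a comprehension for the map and :code:`functools.reduce` for the folds:

.. code-block:: python

   # Hypothetical one-liners for double, concat and flatten over Python lists.
   from functools import reduce

   def double(l: list[int]) -> list[int]:
       return [x * 2 for x in l]                      # map with f = (*2)

   def concat(l1: list, l2: list) -> list:
       # right fold: cons each element of l1 onto l2, back to front
       return reduce(lambda acc, x: [x] + acc, reversed(l1), l2)

   def flatten(ll: list[list]) -> list:
       # left fold: append each inner list onto the accumulator
       return reduce(lambda acc, inner: acc + inner, ll, [])

   assert double([1, 2, 3]) == [2, 4, 6]
   assert concat([1, 2], [3, 4]) == [1, 2, 3, 4]
   assert flatten([[1], [2, 3], []]) == [1, 2, 3]
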
@@ -401,13 +402,13 @@ First let us suppose that we have a function corresponding to a trained recurren
cell, which takes in a past state and an input value and returns a new state and output value. In
Relay, this would have the following signature:

-.. code-block:: python
+.. code-block::
@cell : fn<state_type, in_type, out_type>(state_type, in_type) -> (state_type, out_type)
We might consider a ReLU cell as a simple concrete example, with a trained version below:

-.. code-block:: python
+.. code-block::
def @linear(%x, %w, %b) { %w*%x + %b }
@@ -429,7 +430,7 @@ We might consider a ReLU cell as a simple concrete example, with a trained versi
Following Olah's example, we can encode a sequence (list) of inputs with the following left fold:

-.. code-block:: python
+.. code-block::
def @encode<state_type, in_type, out_type>(%cell, %input : List[in_type], %init : state_type) -> state_type {
# not using the output
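
A hypothetical, tensor-free Python sketch of the same idea, a cell folded over a list of inputs with only the final state kept (scalar stand-ins and made-up weights, not the Relay code):

.. code-block:: python

   # Hypothetical sketch of an RNN-style cell and a left fold that encodes a
   # sequence into a final state. Scalars stand in for tensors and the
   # weights are invented numbers.
   from functools import reduce

   def linear(x: float, w: float, b: float) -> float:
       return w * x + b

   def relu_cell(state: float, inp: float) -> tuple[float, float]:
       # returns (new_state, output); here both are the same ReLU activation
       activation = max(0.0, linear(state, 0.5, 0.0) + linear(inp, 1.0, 0.1))
       return activation, activation

   def encode(cell, inputs: list[float], init: float) -> float:
       # left fold over the inputs, discarding the per-step outputs
       return reduce(lambda state, x: cell(state, x)[0], inputs, init)

   final_state = encode(relu_cell, [1.0, 2.0, 3.0], 0.0)
   assert final_state > 0.0   # a single scalar "state" comes out of the fold
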
@@ -439,7 +440,7 @@ Following Olah's example, we can encode a sequence (list) of inputs with the fol
Using an *unfold* iterator (from Haskell's standard library), the same cell could be used to make
a generator network (which takes a single input and produces a sequence of outputs):

-.. code-block:: python
+.. code-block::
# included in Relay's Prelude
def @unfoldr<a, b>(%f : fn(b) -> Optional[(a, b)], %z : b) -> List[a] {
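
The idea behind :code:`@unfoldr` is to run a step function on a seed until it signals completion, collecting the values it produces along the way. A hypothetical (iterative rather than recursive) Python sketch:

.. code-block:: python

   # Hypothetical sketch of unfoldr: apply f to a seed repeatedly, collecting
   # outputs until f returns None.
   from typing import Callable, Optional, Tuple, TypeVar

   A = TypeVar("A")
   B = TypeVar("B")

   def unfoldr(f: Callable[[B], Optional[Tuple[A, B]]], z: B) -> list[A]:
       out: list[A] = []
       step = f(z)
       while step is not None:
           value, z = step
           out.append(value)
           step = f(z)
       return out

   # A toy "generator": emit the seed and decrement it until it hits zero.
   def countdown_step(n: int):
       return (n, n - 1) if n > 0 else None

   assert unfoldr(countdown_step, 3) == [3, 2, 1]
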
@@ -468,7 +469,7 @@ a generator network (which takes a single input and produces a sequence of outpu
An accumulating map (a fold that simultaneously updates an accumulator value and a list
of outputs) can be used to write a general RNN (with an output for every input):

-.. code-block:: python
+.. code-block::
def @map_accumr<a, b, c>(%f : fn(a, b) -> (a, c), %acc : a, %l : List[b]) -> (a, List[c]) {
match(%l) {
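
An accumulating map threads an accumulator through the list while also emitting one output per element; the final accumulator and the output list are returned together. A hypothetical Python sketch, working right to left as the :code:`r` suffix suggests:

.. code-block:: python

   # Hypothetical right-to-left accumulating map: returns the final
   # accumulator together with one output per input element.
   from typing import Callable, Tuple, TypeVar

   A = TypeVar("A")   # accumulator
   B = TypeVar("B")   # input element
   C = TypeVar("C")   # output element

   def map_accumr(f: Callable[[A, B], Tuple[A, C]],
                  acc: A, l: list[B]) -> Tuple[A, list[C]]:
       outputs: list[C] = []
       for x in reversed(l):          # visit elements from the right
           acc, out = f(acc, x)
           outputs.append(out)
       return acc, list(reversed(outputs))

   # Running sums from the right: each output is the sum of the suffix.
   acc, outs = map_accumr(lambda a, x: (a + x, a + x), 0, [1, 2, 3])
   assert acc == 6 and outs == [6, 5, 3]
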
@@ -500,7 +501,7 @@ Olah also gives an example of a bidirectional neural network, in which two sets
cells (which may have different weights) process the input in both directions and produce a
single set of outputs. The following is a Relay implementation of that example:

-.. code-block:: python
+.. code-block::
# creates a list of tuples from two lists
# included in Relay's Prelude