add document
lint

lint

save

save

add more cases

save

error

lint

lint

commit

do

lint

save

fix lint

wrap it back as func

lint

save

remove dead comment

fix style

fix lint

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

address review feedback

pe now handles free variables; as a result, preserving functions is now trivial.

test

add basic test, implement pretty printing for generic function

test

lint

fix segfault

save

save

do

test

fix another error

address comment

commit

save

address review feedback

add test for invalidate, fix error in lookup

rename cont to body

fix error and add regression test

Update src/relay/pass/partial_eval.cc

Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>

fix error, add test case

fix lint

remove extra line

fix some error

pe

commit

save

save

save

save

save (pe/dce broken)

[DOCKER] Pin flatbuffers checkout to the last release tag (apache#2823). (apache#2879)

[Relay][Text Format] Reverse CallNode Print Order (apache#2882)

[NNPACK] Modernize test (apache#2868)

[Relay] Add list update to prelude (apache#2866)

Add missing sgx includes (apache#2878)

Fix setting up hints for getaddrinfo (apache#2872)

[ARITH] RewriteSimplifier: improved cmp simplification (apache#2851)

do (apache#2883)

[RELAY][Frontend][TF] decompile tf control flow (apache#2830)

* decompile tf control flow

* Add docs

* remove import relay

* move tests under tensorflow frontend

* minor fix

Enhance upsample operator to adapt onnx opset version 9 (apache#2840)

Use version invariant rustfmt (apache#2886)

[Relay][Op] Add group conv2d dispatch to topi function (apache#2870)

* [Relay][Op] Add group conv2d dispatch to topi function

* Rerun tests

[Apps] [howto_deploy] fix cxx-flags order and build directory (apache#2888)

fix prelu so it now works on 2d input, and add one test (apache#2875)

Add dense schedules to __init__ for cpu (apache#2855)

* Add dense schedules to __init__ for cpu

* Add documentation for topi::shape

* Add additional imports to topi CPU __init__.

[TESTS] Improve script robustness (apache#2893)

A number of test scripts use the '|| exit 1' idiom. This has two
issues: first, process exit codes are defined to be in the range 0-255;
second, and more importantly, the idiom is fragile because it requires
that every possible failure point be explicitly coded. This patch
removes the idiom in favour of "set -e", as used in the docker scripts,
which is a more robust mechanism for ensuring that script failures are
always caught and propagated by default.

[Relay] Fix name of bias in testing.mlp (apache#2892)

winograd_nnpack (apache#2721)

[Relay] Fix Relay ARM CPU depthwise spatial pack schedule alter op layout issue. (apache#2861)

* Fix Relay ARM CPU spatial pack depthwise alter op layout issue.

* Update tune_relay_arm.py

[TESTS] Import script robustness (set -u) (apache#2896)

Adopt the "set -u" idiom from the docker scripts as a mechanism to
improve future robustness.

[DOCKER] Upgrade ci-cpu to latest v0.50 (apache#2901)

Allow linking against MKLML (apache#2902)

[COMMUNITY] ASF mentors (apache#2906)

[Relay] Allow converting keras.layers.Sequential (apache#2842)

* Allow converting keras.layers.Sequential

* Use existing new_var function

* Only update expr when missing

* Add test

[Relay] clean up hd, change tl (apache#2917)

Turn on USE_SORT by default (apache#2916)

[TEST] Cache test data (apache#2921)

Unified error handling in NNVM and Relay frontends (apache#2828)

add support for mxnet smooth_l1 (apache#2905)

[Relay] Add support for TupleGetItem in op fusion (apache#2914)

[Relay, TOPI]  Deformable conv2d (apache#2908)

* [Relay, TOPI] Add deformable conv2d

* Moved to op level2

* Fix lint

* Moved to level2 & bug fix

* Update comments

* Disabled flaky test of conv2d

TVM debugresult dump to Chrome Tracing (apache#2922)

[Relay] add test for second order ad (apache#2754)

* do second order

* add comment

* better name

* use tvm assert all close

* refire ci

Revert "[Relay] add test for second order ad (apache#2754)" (apache#2926)

This reverts commit f5ca991.

[Tutorial] Cache the test data in tutorial (apache#2923)

[AUTOTVM] Refactor measure build func (apache#2927)

Fix intersect of modular set (apache#2904)

Fix comment bugs and code style

[Relay, OpFusion] Fix handling TupleGetItem for nested tuples (apache#2929)

Consistent result of DetectLinearEquation() when an empty vars is passed (apache#2860)

[FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay. (apache#2850)

* [FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay.

* 	* test cases

* 	* ci error

Outdated renaming for flatten in ONNX converter (apache#2843)

[FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models. (apache#2864)

* [FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models.

* 	* review comments

Fix vcvtph2ps codegen (apache#2925)

Port changes

More fixes

save

save

Changes to schedules and mxnet importer

save

save

save

save

save

remove

remove
MarisaKirisame committed Apr 15, 2019
1 parent 0634778 commit 74f0b8f
Showing 14 changed files with 801 additions and 151 deletions.
41 changes: 40 additions & 1 deletion include/tvm/relay/expr.h
@@ -184,6 +184,26 @@ class VarNode : public ExprNode {

RELAY_DEFINE_NODE_REF(Var, VarNode, Expr);

/*! \brief Hash a Var by its id.
* Different VarNodes might share the same vid; in that case they are considered the same var.
* Use VarHash to hash a Var by id.
*/
struct VarHash {
size_t operator()(const Var& v) const {
return v->vid.hash();
}
};

/*! \brief Compare Vars by their id.
* Different VarNodes might share the same vid; in that case they are considered the same var.
* Use VarEqual to compare Vars by id.
*/
struct VarEqual {
bool operator()(const Var& l, const Var& r) const {
return l->vid.get() == r->vid.get();
}
};
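
A minimal sketch of how these two functors might be used together, for example keying a map by Var so that two Vars sharing a vid collapse to one entry. The map name and helper function below are illustrative only, not part of this diff:

// Hypothetical usage: count occurrences of each variable, identified by vid.
// Requires <unordered_map>.
std::unordered_map<Var, int, VarHash, VarEqual> use_count;

void CountVar(const Var& v) {
  ++use_count[v];  // Vars with the same vid share a single map entry.
}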

/*!
* \brief Global variable that lives in the top-level module.
* This is used to enable recursive calls between functions.
@@ -521,7 +541,7 @@ RELAY_DEFINE_NODE_REF(RefWrite, RefWriteNode, Expr);
* rewriting pass such as layout or type transformation.
*
* Subclass TempExprNode allows us to pattern match on
* specific kind TempExpr and use them for expression rewriting.
* specific kind of TempExpr and use them for expression rewriting.
*
* TempExpr should only be used within a pass.
*/
@@ -539,6 +559,25 @@ class TempExprNode : public ExprNode {

RELAY_DEFINE_NODE_REF(TempExpr, TempExprNode, Expr);

class Annotate;
class AnnotateNode : public ExprNode {
public:
Expr expr;
NodeRef annotation;
void VisitAttrs(tvm::AttrVisitor* v) final {
v->Visit("expr", &expr);
v->Visit("annotation", &annotation);
v->Visit("_checked_type_", &checked_type_);
}

TVM_DLL static Annotate make(Expr expr, NodeRef annotation);

static constexpr const char* _type_key = "relay.AnnotateNode";
TVM_DECLARE_NODE_TYPE_INFO(AnnotateNode, ExprNode);
};

RELAY_DEFINE_NODE_REF(Annotate, AnnotateNode, Expr);

// implementations
inline const Type& ExprNode::checked_type() const {
CHECK(checked_type_.defined()) << "internal error: the type checker has "
4 changes: 4 additions & 0 deletions include/tvm/relay/expr_functor.h
@@ -116,6 +116,7 @@ class ExprFunctor<R(const Expr& n, Args...)> {
virtual R VisitExpr_(const RefWriteNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
virtual R VisitExpr_(const ConstructorNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
virtual R VisitExpr_(const MatchNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
virtual R VisitExpr_(const AnnotateNode* op, Args... args) EXPR_FUNCTOR_DEFAULT;
virtual R VisitExprDefault_(const Node* op, Args...) {
throw Error(std::string("Do not have a default for ") + op->type_key());
}
@@ -140,6 +141,7 @@ class ExprFunctor<R(const Expr& n, Args...)> {
RELAY_EXPR_FUNCTOR_DISPATCH(RefWriteNode);
RELAY_EXPR_FUNCTOR_DISPATCH(ConstructorNode);
RELAY_EXPR_FUNCTOR_DISPATCH(MatchNode);
RELAY_EXPR_FUNCTOR_DISPATCH(AnnotateNode);
return vtable;
}
};
@@ -170,6 +172,7 @@ class ExprVisitor
void VisitExpr_(const RefWriteNode* op) override;
void VisitExpr_(const ConstructorNode* op) override;
void VisitExpr_(const MatchNode* op) override;
void VisitExpr_(const AnnotateNode* op) override;
virtual void VisitType(const Type& t);
virtual void VisitClause(const Clause& c);
virtual void VisitPattern(const Pattern& c);
@@ -212,6 +215,7 @@ class ExprMutator
Expr VisitExpr_(const RefWriteNode* op) override;
Expr VisitExpr_(const ConstructorNode* op) override;
Expr VisitExpr_(const MatchNode* op) override;
Expr VisitExpr_(const AnnotateNode* op) override;

/*!
* \brief Used to visit the types inside of expressions.
159 changes: 159 additions & 0 deletions python/tvm/relay/network.py
@@ -0,0 +1,159 @@
import numpy as np
import tvm
from tvm import relay
from tvm.relay import op
from tvm.relay import create_executor, Module
from tvm.relay.backend.interpreter import TensorValue
from tvm.relay.prelude import Prelude
import aot
import collections

class OrderedSet(collections.MutableSet):
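"""An insertion-ordered set (the classic doubly-linked-list MutableSet recipe).
Used below for Network.weights and Network.sub_network, presumably to keep
iteration order deterministic."""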

def __init__(self, iterable=None):
self.end = end = []
end += [None, end, end] # sentinel node for doubly linked list
self.map = {} # key --> [key, prev, next]
if iterable is not None:
self |= iterable

def __len__(self):
return len(self.map)

def __contains__(self, key):
return key in self.map

def add(self, key):
if key not in self.map:
end = self.end
curr = end[1]
curr[2] = end[1] = self.map[key] = [key, curr, end]

def discard(self, key):
if key in self.map:
key, prev, next = self.map.pop(key)
prev[2] = next
next[1] = prev

def __iter__(self):
end = self.end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]

def __reversed__(self):
end = self.end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]

def pop(self):
key = self.last()
self.discard(key)
return key

def last(self):
return self.end[1][0]

def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, list(self))

def __eq__(self, other):
if isinstance(other, OrderedSet):
return len(self) == len(other) and list(self) == list(other)
return set(self) == set(other)

def initialize(param):
ty = param.type_annotation
shape = [int(i) for i in ty.shape]
return np.random.normal(0, 1, shape).astype('float32')

def copy_var(v):
return relay.Var(v.name_hint, v.type_annotation)

class Network:
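"""Small helper for building Relay functions.

Subclasses define build_impl() to construct the body; input() and weight()
register parameters along the way. Calling another Network instance from
inside build_impl registers it as a sub-network and threads its weights
through, while calling a network from inside its own build introduces a
recursive binding. The resulting relay.Function is stored in self.mod
under the GlobalVar self.f."""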
stack = []
cnt = 0

def __init__(self, *, name="f", **kwargs):
name = f"{name}_{Network.cnt}"
Network.cnt += 1
if len(Network.stack) != 0:
mod = Network.stack[-1].mod
p = Network.stack[-1].p
else:
mod = Module()
p = Prelude(mod)

self.mod = mod
self.p = p
self.inputs = []
self.weights = OrderedSet()
self.sub_network = OrderedSet()
self.f = relay.GlobalVar(name)
self.recurse = relay.Var("recurse")
self.use_recurse = False
self.ret_type = None
body = self.build(**kwargs)
assert isinstance(body, relay.Expr)
if self.use_recurse:
inputs = [copy_var(v) for v in self.inputs]
body = relay.Let(self.recurse, relay.Function(inputs, self.call_from_outside(*inputs)), body)
self.mod[self.f] = relay.Function(self.inputs + self.all_weights(), body, self.ret_type)

def build(self, **kwargs):
Network.stack.append(self)
try:
return self.build_impl(**kwargs)
finally:
Network.stack.pop()

def build_impl(self, *args):
raise NotImplementedError

def weight(self, w):
assert isinstance(w, relay.Var)
self.weights.add(w)
return w

def input(self, i):
assert isinstance(i, relay.Var)
self.inputs.append(i)
return i

def all_weights(self):
return list(set(list(self.weights) + [w for n in self.sub_network for w in n.all_weights()]))

def call_from_outside(self, *inputs):
return self.f(*(list(inputs) + self.all_weights()))

def __call__(self, *inputs):
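# Calling a network inside its own build() becomes a recursive reference;
# calling it from another network's build() registers it as a sub-network
# and routes the call through call_from_outside so the shared weights are
# passed along as extra arguments.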
if self in Network.stack:
self.use_recurse = True
return self.recurse(*inputs)
else:
assert len(Network.stack) > 0
assert Network.stack[-1].mod == self.mod
assert Network.stack[-1].p == self.p
Network.stack[-1].sub_network.add(self)
return self.call_from_outside(*inputs)

def interface_type(self):
t = relay.ir_pass.infer_type(self.mod[self.f], mod=self.mod).checked_type
return relay.FuncType(t.arg_types[:len(self.inputs)], t.ret_type, t.type_params, t.type_constraints)

def get(self):
weights = []
for x in self.all_weights():
ty = x.type_annotation
assert isinstance(ty, relay.TensorType)
assert ty.dtype == 'float32'
shape = [int(i) for i in ty.shape]
weight = relay.const(np.random.normal(0, 1, shape).astype('float32'))
weights.append(weight)
inputs = [copy_var(v) for v in self.inputs]
return relay.Function(inputs, self.f(*inputs, *weights))
2 changes: 1 addition & 1 deletion python/tvm/relay/op/nn/_nn.py
@@ -74,7 +74,7 @@ def schedule_batch_matmul(attrs, outputs, target):
with target:
return topi.generic.schedule_batch_matmul(outputs)

reg.register_pattern("nn.batch_matmul", reg.OpPattern.OUT_ELEMWISE_FUSABLE)
reg.register_pattern("nn.batch_matmul", reg.OpPattern.OPAQUE)


# conv2d
93 changes: 93 additions & 0 deletions python/tvm/relay/test_network.py
@@ -0,0 +1,93 @@
from .network import Network
from tvm import relay
from tvm.relay import op, var, Var, Function, Clause, PatternConstructor, PatternVar, Match
from tvm.relay import TupleGetItem, Tuple, TensorType, TupleType

class Linear(Network):
def build_impl(self, input_size, output_size, dtype="float32"):
x = self.input(var("linear_input", shape=(1, input_size), dtype=dtype))
w = self.weight(var("linear_weight", shape=(output_size, input_size), dtype=dtype))
b = self.weight(var("linear_bias", shape=(output_size,), dtype=dtype))
return op.add(op.nn.dense(x, w), b)

def lam(names, func):
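"""Build a relay Function whose parameters are fresh Vars named `names`,
with the body produced by applying the Python callable `func` to those Vars."""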
args = [Var(name) for name in names]
return Function(args, func(*args))

class LSTMCell(Network):
def build_impl(self, input_size, memory_size, dtype="float32"):
t = TensorType(shape=(1, memory_size), dtype=dtype)
i = self.input(var("lstmcell_input", shape=(1, input_size), dtype=dtype))
c = self.input(Var("lstmcell_children", self.p.l(TupleType([t, t]))))
sum = lam(["x", "y"], lambda x, y: x + y)
child_h_sum = self.p.foldl(sum,
op.zeros(shape=(1, memory_size), dtype=dtype),
self.p.map(lam(["z"], lambda z: TupleGetItem(z, 1)), c))
ioux = Linear(input_size=input_size, output_size=memory_size * 3)(i)
iouh = Linear(input_size=memory_size, output_size=memory_size * 3)(child_h_sum)
iou = ioux + iouh
fx = Linear(input_size=input_size, output_size=memory_size)(i)
fh = Linear(input_size=memory_size, output_size=memory_size)
i, o, u = op.split(iou, 3, axis=1)
i, o, u = op.sigmoid(i), op.sigmoid(o), op.tanh(u)
def foreach_children(children):
f = op.sigmoid(fh(TupleGetItem(children, 1)) + fx)
return f * TupleGetItem(children, 0)
c = self.p.foldl(sum, i * u, self.p.map(lam(["z"], foreach_children), c))
return Tuple([c, o * op.tanh(c)])

class LSTMEncoder(Network):
def build_impl(self, input_size, memory_size, dtype="float32"):
l = self.input(Var("l", self.p.l(TensorType(shape=(1, input_size), dtype=dtype))))
cell = LSTMCell(input_size=input_size, memory_size=memory_size, dtype=dtype)
return self.p.foldl(lam(["c", "x"], lambda c, x: cell(x, self.p.cons(c, self.p.nil()))),
Tuple([op.zeros(shape=(1, memory_size), dtype=dtype),
op.zeros(shape=(1, memory_size), dtype=dtype)]), l)

class LSTMTransformer(Network):
def build_impl(self, input_size, memory_size, dtype="float32"):
l = self.input(Var("l", self.p.l(TensorType(shape=(1, input_size), dtype=dtype))))
def f(c, x):
cell = LSTMCell(input_size=input_size, memory_size=memory_size, dtype=dtype)
o = cell(x, self.p.cons(c, self.p.nil()))
return Tuple([o, TupleGetItem(o, 1)])
res = self.p.map_accuml(lam(["c", "x"], f),
Tuple([op.zeros(shape=(1, memory_size), dtype=dtype),
op.zeros(shape=(1, memory_size), dtype=dtype)]),
l)
return Tuple([TupleGetItem(TupleGetItem(res, 0), 1), TupleGetItem(res, 1)])

class TreeLSTM(Network):
def build_impl(self, input_size, memory_size, dtype="float32"):
t = TensorType(shape=(1, memory_size), dtype=dtype)
self.ret_type = TupleType([t, t])
tree_type = self.p.tree(TensorType(shape=(1, input_size), dtype=dtype))
t = self.input(Var("tlstm_input", tree_type))
i = Var("i", TensorType(shape=(1, input_size), dtype=dtype))
c = Var("c", self.p.l(tree_type))
cell = LSTMCell(input_size=input_size, memory_size=memory_size, dtype=dtype)
rose_case = Clause(PatternConstructor(self.p.rose, [PatternVar(i), PatternVar(c)]),
cell(i, self.p.map(lam(["x"], self), c)))
return Match(t, [rose_case])

class BiLSTM(Network):
def build_impl(self, input_size, memory_size, dtype="float32"):
l = self.input(Var("l", self.p.l(TensorType(shape=(1, input_size), dtype=dtype))))
def LSTM(l):
return LSTMTransformer(input_size=input_size,
memory_size=memory_size,
dtype=dtype)(l)
fwd = LSTM(l)
rev = LSTM(self.p.rev(l))
lhs = op.concatenate([TupleGetItem(fwd, 0), TupleGetItem(rev, 0)], axis=1)
t = TensorType(shape=(1, memory_size), dtype=dtype)
x = Var("x", TupleType([t, t])) # cannot infer here
rhs = self.p.map(Function([x], op.concatenate([TupleGetItem(x, 0),
TupleGetItem(x, 1)],
axis=1)),
self.p.zip(TupleGetItem(fwd, 1), TupleGetItem(rev, 1)))
return Tuple([lhs, rhs])

# t = BiLSTM(input_size=128, memory_size=256)
# print("type of BidirectionalLSTM, with input_size=128, memory_size=256, is:")
# print(t.interface_type())
15 changes: 13 additions & 2 deletions src/relay/ir/expr.cc
@@ -232,8 +232,7 @@ TVM_REGISTER_API("relay._make.Call")

TVM_STATIC_IR_FUNCTOR_REGISTER(IRPrinter, vtable)
.set_dispatch<CallNode>([](const CallNode* node, tvm::IRPrinter* p) {
p->stream << "CallNode(" << node->op << ", " << node->args << ", "
<< node->attrs << ", " << node->type_args << ")";
p->stream << "CallNode(" << node->op << ")";
});

Let LetNode::make(Var var, Expr value, Expr body) {
@@ -349,5 +348,17 @@ TVM_REGISTER_API("relay._expr.TempExprRealize")
*ret = temp->Realize();
});

Annotate AnnotateNode::make(Expr expr, NodeRef annotation) {
NodePtr<AnnotateNode> n = make_node<AnnotateNode>();
n->expr = std::move(expr);
n->annotation = std::move(annotation);
return Annotate(n);
}

TVM_STATIC_IR_FUNCTOR_REGISTER(IRPrinter, vtable)
.set_dispatch<AnnotateNode>([](const AnnotateNode* node, tvm::IRPrinter* p) {
p->stream << "AnnotateNode(" << node->expr << ")";
});

} // namespace relay
} // namespace tvm
