Commit
lint lint save save add more case save error lint lint commit do lint save fix lint wrap it back as func lint save remove dead comment fix style fix lint
Update src/relay/pass/partial_eval.cc Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
Update src/relay/pass/partial_eval.cc Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
Update src/relay/pass/partial_eval.cc Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
Update src/relay/pass/partial_eval.cc Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
Update src/relay/pass/partial_eval.cc Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
Update src/relay/pass/partial_eval.cc Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
address review feedback
pe now handle freevar. as a result preserving function is now trivial.
test add basic test, implement pretty printing for generic function test lint fix segfault save save do test fix another error address comment commit save
address review feedback
add test for invalidate, fix error in lookup
rename cont to boduy
fix error and add regression test
Update src/relay/pass/partial_eval.cc Co-Authored-By: MarisaKirisame <lolisa@marisa.moe>
fix error, add test case fix lint remove extra line fix some error pe commit save save save save save (pe/dce broken)
[DOCKER] Pin flatbuffers checkout to the last release tag (apache#2823). (apache#2879)
[Relay][Text Format] Reverse CallNode Print Order (apache#2882)
[NNPACK] Modernize test (apache#2868)
[Relay] Add list update to prelude (apache#2866)
Add missing sgx includes (apache#2878)
Fix setting up hints for getaddrinfo (apache#2872)
[ARITH] RewriteSimplifier: improved cmp simplification (apache#2851)
do (apache#2883)
[RELAY][Frontend][TF] decompile tf control flow (apache#2830)
* decompile tf control flow
* Add docs
* remove import relay
* move tests under tensorflow frontend
* minor fix
Enhance upsample operator to adapt onnx opset version 9 (apache#2840)
Use version invariant rustfmt (apache#2886)
[Relay][Op] Add group conv2d dispatch to topi function (apache#2870)
* [Relay][Op] Add group conv2d dispatch to topi function
* Rerun tests
[Apps] [howto_deploy] fix cxx-flags order and build directory (apache#2888)
fix prelu, now can use on 2d input and add one test (apache#2875)
Add dense schedules to __init__ for cpu (apache#2855)
* Add dense schedules to __init__ for cpu
* Add documentation for topi::shape
* Add additional imports to topi CPU __init__.
[TESTS] Improve script robustness (apache#2893)
A number of test scripts use the '|| exit 1' idiom. This has two issues, first process exit codes are defined to be in the range 0-255. Second, more importantly, the idiom is fragile because it requires that every possible failure point be explicitly coded. This patch removes the idiom in favour of "set -e" as used in the docker scripts as a more robust mechanism to ensure that script failures are always caught and propagated by default.
[Relay] Fix name of bias in testing.mlp (apache#2892)
winograd_nnpack (apache#2721)
[Relay] Fix Relay ARM CPU depthwise spatial pack schedule alter op layout issue. (apache#2861)
* Fix Relay ARM CPU spatial pack depthwise alter op layout issue.
* Update tune_relay_arm.py
[TESTS] Import script robustness (set -u) (apache#2896)
Adopt the "set -u" idiom from the docker scripts as a mechanism to improve future robustness.
[DOCKER] Upgrade ci-cpu to latest v0.50 (apache#2901)
Allow linking against MKLML (apache#2902)
[COMMUNITY] ASF mentors (apache#2906)
[Relay] Allow converting keras.layers.Sequential (apache#2842)
* Allow converting keras.layers.Sequential
* Use existing new_var function
* Only update expr when missing
* Add test
[Relay] clean up hd, change tl (apache#2917)
Turn on USE_SORT by default (apache#2916)
[TEST] Cache test data (apache#2921)
Unified error handling in NNVM and Relay frontends (apache#2828)
add support for mxnet smooth_l1 (apache#2905)
[Relay] Add support for TupleGetItem in op fusion (apache#2914)
[Relay, TOPI] Deformable conv2d (apache#2908)
* [Relay, TOPI] Add deformable conv2d
* Moved to op level2
* Fix lint
* Moved to level2 & bug fix
* Update comments
* Disabled flaky test of conv2d
TVM debugresult dump to Chrome Tracing (apache#2922)
[Relay] add test for second order ad (apache#2754)
* do second order
* add comment
* better name
* use tvm assert all close
* refire ci
Revert "[Relay] add test for second order ad (apache#2754)" (apache#2926)
This reverts commit f5ca991.
[Tutorial] Cache the test data in tutorial (apache#2923)
[AUTOTVM] Refactor measure build func (apache#2927)
Fix intersect of modular set (apache#2904)
Fix comment bugs and code style
[Relay, OpFusion] Fix handling TupleGetItem for nested tuples (apache#2929)
Consistent result of DetectLinearEquation() when an empty vars is passed (apache#2860)
[FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay. (apache#2850)
* [FRONTEND][ONNX] Some bug fixes and Shape operator fixed for relay.
* test cases
* ci error
Outdated renaming for flatten in ONNX converter (apache#2843)
[FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models. (apache#2864)
* [FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models.
* review comments
Fix vcvtph2ps codegen (apache#2925)
Port changes
More fixes
save save
Changes to schedules and mxnet importer
save save save save save
remove remove
1 parent 0634778, commit 74f0b8f. Showing 14 changed files with 801 additions and 151 deletions.
@@ -0,0 +1,159 @@
import numpy as np
import tvm
from tvm import relay
from tvm.relay import op
from tvm.relay import create_executor, Module
from tvm.relay.backend.interpreter import TensorValue
from tvm.relay.prelude import Prelude
import aot
import collections.abc


class OrderedSet(collections.abc.MutableSet):
    """Insertion-ordered set (the standard doubly-linked-list recipe)."""

    def __init__(self, iterable=None):
        self.end = end = []
        end += [None, end, end]  # sentinel node for doubly linked list
        self.map = {}            # key --> [key, prev, next]
        if iterable is not None:
            self |= iterable

    def __len__(self):
        return len(self.map)

    def __contains__(self, key):
        return key in self.map

    def add(self, key):
        if key not in self.map:
            end = self.end
            curr = end[1]
            curr[2] = end[1] = self.map[key] = [key, curr, end]

    def discard(self, key):
        if key in self.map:
            key, prev, next = self.map.pop(key)
            prev[2] = next
            next[1] = prev

    def __iter__(self):
        end = self.end
        curr = end[2]
        while curr is not end:
            yield curr[0]
            curr = curr[2]

    def __reversed__(self):
        end = self.end
        curr = end[1]
        while curr is not end:
            yield curr[0]
            curr = curr[1]

    def pop(self):
        key = self.last()
        self.discard(key)
        return key

    def last(self):
        return self.end[1][0]

    def __repr__(self):
        if not self:
            return '%s()' % (self.__class__.__name__,)
        return '%s(%r)' % (self.__class__.__name__, list(self))

    def __eq__(self, other):
        if isinstance(other, OrderedSet):
            return len(self) == len(other) and list(self) == list(other)
        return set(self) == set(other)

def initialize(param):
    """Randomly initialize a value matching the parameter's annotated shape."""
    ty = param.type_annotation
    shape = [int(i) for i in ty.shape]
    return np.random.normal(0, 1, shape).astype('float32')


def copy_var(v):
    return relay.Var(v.name_hint, v.type_annotation)


class Network:
    stack = []  # Networks currently being built; used to detect nesting and recursion
    cnt = 0     # counter used to give each network's GlobalVar a unique name

    def __init__(self, *, name="f", **kwargs):
        name = f"{name}_{Network.cnt}"
        Network.cnt += 1
        if len(Network.stack) != 0:
            mod = Network.stack[-1].mod
            p = Network.stack[-1].p
        else:
            mod = Module()
            p = Prelude(mod)

        self.mod = mod
        self.p = p
        self.inputs = []
        self.weights = OrderedSet()
        self.sub_network = OrderedSet()
        self.f = relay.GlobalVar(name)
        self.recurse = relay.Var("recurse")
        self.use_recurse = False
        self.ret_type = None
        body = self.build(**kwargs)
        assert isinstance(body, relay.Expr)
        if self.use_recurse:
            inputs = [copy_var(v) for v in self.inputs]
            body = relay.Let(self.recurse, relay.Function(inputs, self.call_from_outside(*inputs)), body)
        self.mod[self.f] = relay.Function(self.inputs + self.all_weights(), body, self.ret_type)

    def build(self, **kwargs):
        Network.stack.append(self)
        try:
            return self.build_impl(**kwargs)
        finally:
            Network.stack.pop()

    def build_impl(self, *args):
        raise NotImplementedError

    def weight(self, w):
        assert isinstance(w, relay.Var)
        self.weights.add(w)
        return w

    def input(self, i):
        assert isinstance(i, relay.Var)
        self.inputs.append(i)
        return i

    def all_weights(self):
        return list(set(list(self.weights) + [w for n in self.sub_network for w in n.all_weights()]))

    def call_from_outside(self, *inputs):
        return self.f(*(list(inputs) + self.all_weights()))

    def __call__(self, *inputs):
        if self in Network.stack:
            self.use_recurse = True
            return self.recurse(*inputs)
        else:
            assert len(Network.stack) > 0
            assert Network.stack[-1].mod == self.mod
            assert Network.stack[-1].p == self.p
            Network.stack[-1].sub_network.add(self)
            return self.call_from_outside(*inputs)

    def interface_type(self):
        t = relay.ir_pass.infer_type(self.mod[self.f], mod=self.mod).checked_type
        return relay.FuncType(t.arg_types[:len(self.inputs)], t.ret_type, t.type_params, t.type_constraints)

    def get(self):
        weights = []
        for x in self.all_weights():
            ty = x.type_annotation
            assert isinstance(ty, relay.TensorType)
            assert ty.dtype == 'float32'
            shape = [int(i) for i in ty.shape]
            weight = relay.const(np.random.normal(0, 1, shape).astype('float32'))
            weights.append(weight)
        inputs = [copy_var(v) for v in self.inputs]
        return relay.Function(inputs, self.f(*inputs, *weights))
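For orientation (not part of the diff): a minimal, hypothetical sketch of how the Network helper above is intended to be used. The Dense subclass, the variable names, and the relay.create_executor call are illustrative assumptions that mirror the Linear network defined in the file below; only build_impl, input, weight, interface_type, get, and the imports above come from this commit.

class Dense(Network):
    def build_impl(self, input_size, output_size):
        # declare one input and one weight, then return the relay expression for the body
        x = self.input(relay.var("x", shape=(1, input_size), dtype="float32"))
        w = self.weight(relay.var("w", shape=(output_size, input_size), dtype="float32"))
        return op.nn.dense(x, w)

net = Dense(input_size=4, output_size=2)
print(net.interface_type())        # function type as seen from outside (weights hidden)
func = net.get()                   # relay.Function with weights bound to random constants
ex = create_executor(mod=net.mod)  # interpreter over the module that holds net.f
y = ex.evaluate(func)(np.random.normal(0, 1, (1, 4)).astype("float32"))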
@@ -0,0 +1,93 @@
from .network import Network
from tvm import relay
from tvm.relay import op, var, Var, Function, Clause, PatternConstructor, PatternVar, Match
from tvm.relay import TupleGetItem, Tuple, TensorType, TupleType


class Linear(Network):
    def build_impl(self, input_size, output_size, dtype="float32"):
        x = self.input(var("linear_input", shape=(1, input_size), dtype=dtype))
        w = self.weight(var("linear_weight", shape=(output_size, input_size), dtype=dtype))
        b = self.weight(var("linear_bias", shape=(output_size,), dtype=dtype))
        return op.add(op.nn.dense(x, w), b)


def lam(names, func):
    args = [Var(name) for name in names]
    return Function(args, func(*args))

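# Added commentary (not in the original commit): LSTMCell below is a child-sum style
# tree-LSTM cell. It takes an input vector and a Prelude list of (c, h) pairs from the
# children; the child hidden states are summed, linear transforms of the input and of
# that sum are added and split into the i/o/u gates, a forget gate f is computed per
# child, and the cell returns (c, h) with c = i*u + sum_j f_j * c_j and h = o * tanh(c).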
class LSTMCell(Network):
    def build_impl(self, input_size, memory_size, dtype="float32"):
        t = TensorType(shape=(1, memory_size), dtype=dtype)
        i = self.input(var("lstmcell_input", shape=(1, input_size), dtype=dtype))
        c = self.input(Var("lstmcell_children", self.p.l(TupleType([t, t]))))
        sum = lam(["x", "y"], lambda x, y: x + y)
        child_h_sum = self.p.foldl(sum,
                                   op.zeros(shape=(1, memory_size), dtype=dtype),
                                   self.p.map(lam(["z"], lambda z: TupleGetItem(z, 1)), c))
        ioux = Linear(input_size=input_size, output_size=memory_size * 3)(i)
        iouh = Linear(input_size=memory_size, output_size=memory_size * 3)(child_h_sum)
        iou = ioux + iouh
        fx = Linear(input_size=input_size, output_size=memory_size)(i)
        fh = Linear(input_size=memory_size, output_size=memory_size)
        i, o, u = op.split(iou, 3, axis=1)
        i, o, u = op.sigmoid(i), op.sigmoid(o), op.tanh(u)
        def foreach_children(children):
            f = op.sigmoid(fh(TupleGetItem(children, 1)) + fx)
            return f * TupleGetItem(children, 0)
        c = self.p.foldl(sum, i * u, self.p.map(lam(["z"], foreach_children), c))
        return Tuple([c, o * op.tanh(c)])

class LSTMEncoder(Network):
    def build_impl(self, input_size, memory_size, dtype="float32"):
        l = self.input(Var("l", self.p.l(TensorType(shape=(1, input_size), dtype=dtype))))
        cell = LSTMCell(input_size=input_size, memory_size=memory_size, dtype=dtype)
        return self.p.foldl(lam(["c", "x"], lambda c, x: cell(x, self.p.cons(c, self.p.nil()))),
                            Tuple([op.zeros(shape=(1, memory_size), dtype=dtype),
                                   op.zeros(shape=(1, memory_size), dtype=dtype)]), l)


class LSTMTransformer(Network):
    def build_impl(self, input_size, memory_size, dtype="float32"):
        l = self.input(Var("l", self.p.l(TensorType(shape=(1, input_size), dtype=dtype))))
        def f(c, x):
            cell = LSTMCell(input_size=input_size, memory_size=memory_size, dtype=dtype)
            o = cell(x, self.p.cons(c, self.p.nil()))
            return Tuple([o, TupleGetItem(o, 1)])
        res = self.p.map_accuml(lam(["c", "x"], f),
                                Tuple([op.zeros(shape=(1, memory_size), dtype=dtype),
                                       op.zeros(shape=(1, memory_size), dtype=dtype)]),
                                l)
        return Tuple([TupleGetItem(TupleGetItem(res, 0), 1), TupleGetItem(res, 1)])

class TreeLSTM(Network):
    def build_impl(self, input_size, memory_size, dtype="float32"):
        t = TensorType(shape=(1, memory_size), dtype=dtype)
        self.ret_type = TupleType([t, t])
        tree_type = self.p.tree(TensorType(shape=(1, input_size), dtype=dtype))
        t = self.input(Var("tlstm_input", tree_type))
        i = Var("i", TensorType(shape=(1, input_size), dtype=dtype))
        c = Var("c", self.p.l(tree_type))
        cell = LSTMCell(input_size=input_size, memory_size=memory_size, dtype=dtype)
        rose_case = Clause(PatternConstructor(self.p.rose, [PatternVar(i), PatternVar(c)]),
                           cell(i, self.p.map(lam(["x"], self), c)))
        return Match(t, [rose_case])

class BiLSTM(Network):
    def build_impl(self, input_size, memory_size, dtype="float32"):
        l = self.input(Var("l", self.p.l(TensorType(shape=(1, input_size), dtype=dtype))))
        def LSTM(l):
            return LSTMTransformer(input_size=input_size,
                                   memory_size=memory_size,
                                   dtype=dtype)(l)
        fwd = LSTM(l)
        rev = LSTM(self.p.rev(l))
        lhs = op.concatenate([TupleGetItem(fwd, 0), TupleGetItem(rev, 0)], axis=1)
        t = TensorType(shape=(1, memory_size), dtype=dtype)
        x = Var("x", TupleType([t, t]))  # cannot infer here
        rhs = self.p.map(Function([x], op.concatenate([TupleGetItem(x, 0),
                                                       TupleGetItem(x, 1)],
                                                      axis=1)),
                         self.p.zip(TupleGetItem(fwd, 1), TupleGetItem(rev, 1)))
        return Tuple([lhs, rhs])


# t = BiLSTM(input_size=128, memory_size=256)
# print("type of BidirectionalLSTM, with input_size=128, memory_size=256, is:")
# print(t.interface_type())
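# Hypothetical extra usage sketch (mirrors the commented-out example above; not in the
# original commit):
# t = TreeLSTM(input_size=128, memory_size=256)
# print(t.interface_type())
# f = t.get()  # plain relay.Function with the weights bound to random constants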