
[TARGET] ONNX codegen #5052

Merged: 34 commits merged into apache:master on Jul 15, 2020

Conversation

@maheshambule (Contributor) commented Mar 12, 2020

This PR adds support for ONNX codegen. Currently, only a subset of operators is supported. The conversion can also be run on optimized Relay modules, except for modules that have gone through fused-op passes.

An ONNX codegen module is added; runtime support is not included yet.

Operators supported:

  • reshape
  • conv2d
  • add
  • relu
  • transpose
  • dense
  • max_pool2d
  • batch_flatten
  • multiply
  • bias_add
  • batch_norm
  • global_avg_pool2d
  • concatenate
  • dropout
  • avg_pool2d
  • divide
  • mean
  • pad
  • softmax
  • squeeze
  • strided_slice
  • greater
  • less
  • equal
  • zeros_like
  • ones_like
  • subtract

Models tested:

  • ONNX Model Zoo ResNet and SqueezeNet
  • TF-Slim ResNet
  • MXNet ResNet

TO-DO:

  • Increase operator coverage to support all Relay operators
  • Add support for the AlterOpLayout optimization pass
  • Add support for Relay constructs such as functions, ADTs, etc.
  • Add support for different ONNX opset versions for different operators
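The codegen walks the Relay graph with a visitor and emits one ONNX node per supported operator. As a rough, framework-free sketch of that pattern (all class, field, and operator names here are illustrative stand-ins, not the actual TVM or ONNX API):

```python
# Minimal sketch of visitor-based codegen: walk a tiny Relay-like expression
# graph in post-order and emit ONNX-style node records. Illustrative only.

class Var:
    def __init__(self, name):
        self.name = name

class Call:
    def __init__(self, op, args):
        self.op = op      # operator name, e.g. "nn.relu"
        self.args = args  # input expressions

# Mapping from source ops to ONNX op types (tiny subset, for illustration).
OP_MAP = {"nn.relu": "Relu", "add": "Add", "nn.dense": "Gemm"}

def to_onnx_nodes(expr, nodes=None, counter=None):
    """Post-order visit: emit input nodes first, then the node itself.
    Returns (output name of `expr`, list of emitted node dicts)."""
    if nodes is None:
        nodes, counter = [], [0]
    if isinstance(expr, Var):
        return expr.name, nodes
    input_names = [to_onnx_nodes(a, nodes, counter)[0] for a in expr.args]
    out = "out_%d" % counter[0]
    counter[0] += 1
    nodes.append({"op_type": OP_MAP[expr.op],
                  "inputs": input_names, "output": out})
    return out, nodes

x, y = Var("x"), Var("y")
_, nodes = to_onnx_nodes(Call("nn.relu", [Call("add", [x, y])]))
# nodes now holds an Add record followed by a Relu record consuming its output
```

In the actual PR, the traversal subclasses Relay's expression visitor (visible in the visit_call/visit_tuple frames in the tracebacks later in this thread) and the emitted records are ONNX protobuf nodes; this sketch only shows the post-order emit ordering.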

@maheshambule (Contributor, Author) commented:

CC: @yongwww, @zhiics, @kevinthesun

@kevinthesun (Contributor) commented:

Since this is a new frontend module for TVM, @maheshambule, could you open an RFC to discuss the purpose, overall design, and constraints with the community? From there, we can figure out the best way to support all similar use cases.

@kevinthesun kevinthesun added the status: need RFC need RFC discussion label Mar 12, 2020
@maheshambule (Contributor, Author) commented:

Sorry for the late response. I will post the RFC, probably next week.

@jroesch (Member) commented Mar 23, 2020

cc @tqchen

RE: naming

I think we should probably refer to these as exporters, or as some kind of specialized target. "Conversion" has never sat well with me; it gives the impression that these aren't compilers between different IRs.

@tqchen (Member) commented Mar 25, 2020

I agree that target is possibly a better name; we can discuss it in the RFC.

@tqchen tqchen added status: need update need update based on feedbacks and removed status: need RFC need RFC discussion labels Apr 30, 2020
@maheshambule maheshambule changed the title Relay to ONNX converter Relay to ONNX and ONNX codegen May 14, 2020
@maheshambule (Contributor, Author) commented May 14, 2020

@tqchen, @yongwww, @zhiics, @kevinthesun, @alexwong, based on the discussion on the RFC, the PR has been updated. Please help with the review.

@maheshambule maheshambule changed the title Relay to ONNX and ONNX codegen ONNX codegen May 14, 2020
@kazum kazum removed the status: need update need update based on feedbacks label Jun 15, 2020
@maheshambule (Contributor, Author) commented:

@tqchen, @zhiics, @siju-samuel, could you please approve the PR if there are no further comments?

    if node_entry['name'] in self._params:
        self._add_params(node_entry, idx)
    else:
        type = node_entry['types'][0]
@alexwong (Contributor) commented on this code:
I think it's preferable to rename type to something else so it doesn't shadow Python's built-in type function. Also, I am testing this with a very specific case, and it seems this function needs some extra logic to handle TupleType as an input (though there's already some of that elsewhere in the code). I can provide an example (either a script or a Relay graph) to illustrate, if you'd like.
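The shadowing hazard can be shown with a minimal, self-contained snippet (the node_entry dict and function name here are stand-ins, not the real structures in the PR):

```python
# Binding the name `type` makes it a local variable for the whole function
# body, shadowing Python's built-in type() within that scope.
def get_first_type(node_entry):
    type = node_entry['types'][0]   # shadows the builtin
    try:
        type(node_entry)            # now calls the string, not the builtin
        shadowed = False
    except TypeError:
        shadowed = True
    return type, shadowed

first, shadowed = get_first_type({'types': ['float32']})
# first is 'float32'; shadowed is True, i.e. the builtin is unreachable here
```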

@maheshambule (Contributor, Author) replied:

OK, I will change type to relay_type. Also, please do share the test script; I will check whether there is an issue.
The test_tuple_types test case also contains different scenarios for TupleType; you might want to take a look.

@maheshambule (Contributor, Author) commented Jun 27, 2020:

@alexwong, could you please provide the example so that I can test it?

I guess your concern is about accessing only the first element of node_entry rather than all of them. Below is a test script that exercises the concat operator, which works on TupleType, and it passes. It works because even though concat accepts a TupleType, the wrapper (relay.Function) does not take a relay.Tuple directly; it takes a Python list or tuple, converts it to a relay.Tuple, and passes that to concat. So a relay.Tuple never appears as an input to the Relay graph, and node_entry will always have exactly one element.

    def verify_concatenate(shapes, axis, dtype="float32"):
        in_vars = []
        in_data = []
        for i, shape in enumerate(shapes):
            in_vars.append(relay.var("x" + str(i), relay.ty.TensorType(shape, dtype)))
            in_data.append(np.random.uniform(size=shape).astype(dtype))

        y = relay.Tuple(in_vars)
        out_tensor = relay.concatenate(y, axis)
        func = relay.Function(in_vars, out_tensor)
        verify_results(func, in_data, 'test_concatenate', rtol=1e-5, atol=1e-5)

    verify_concatenate([(2,), (2,), (2,)], -1)

Relay IR:

fn (%x0: Tensor[(2), float32], %x1: Tensor[(2), float32], %x2: Tensor[(2), float32]) {
  %0 = (%x0, %x1, %x2);
  concatenate(%0, axis=-1)
}

@alexwong (Contributor) commented Jul 6, 2020:

Sorry for the late response. Is it possible for you to test with this Relay module? I serialized it to JSON and uploaded it here; you can use load_json to get the Relay module back from it. Let me know once you have it so I can remove it from S3.

  File "tests/python/relay/temptest.py", line 218, in <module>
    test_mxnet_mobilenet_ssd()
  File "tests/python/relay/temptest.py", line 212, in test_mxnet_mobilenet_ssd
    onnx_model = to_onnx(module, {}, name, path=onnx_path)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 853, in to_onnx
    onnx_model = converter.convert_to_onnx(func)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 677, in convert_to_onnx
    self.visit(func)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 685, in visit
    super().visit(expr)
  File "/tvm/python/tvm/relay/expr_functor.py", line 44, in visit
    res = self.visit_function(expr)
  File "/tvm/python/tvm/relay/expr_functor.py", line 153, in visit_function
    [self.visit(f.body)]
  File "/tvm/python/tvm/contrib/target/onnx.py", line 685, in visit
    super().visit(expr)
  File "/tvm/python/tvm/relay/expr_functor.py", line 56, in visit
    res = self.visit_tuple(expr)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 707, in visit_tuple
    self.visit(f)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 685, in visit
    super().visit(expr)
  File "/tvm/python/tvm/relay/expr_functor.py", line 46, in visit
    res = self.visit_call(expr)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 735, in visit_call
    self.visit(input_arg)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 685, in visit
    super().visit(expr)
  File "/tvm/python/tvm/relay/expr_functor.py", line 56, in visit
    res = self.visit_tuple(expr)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 707, in visit_tuple
    self.visit(f)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 685, in visit
    super().visit(expr)
  File "/tvm/python/tvm/relay/expr_functor.py", line 46, in visit
    res = self.visit_call(expr)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 735, in visit_call
    self.visit(input_arg)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 685, in visit
    super().visit(expr)
  File "/tvm/python/tvm/relay/expr_functor.py", line 46, in visit
    res = self.visit_call(expr)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 747, in visit_call
    self._add_node(node_entry, node_index)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 758, in _add_node
    return converter.convert(node_entry, self._mc, self._node_dict)
  File "/tvm/python/tvm/contrib/target/onnx.py", line 530, in convert
    dtype = input_node['relay_node'].type_annotation.dtype
  File "/tvm/python/tvm/runtime/object.py", line 59, in __getattr__
    "%s has no attribute %s" % (str(type(self)), name))
AttributeError: <class 'tvm.relay.expr.Call'> has no attribute type_annotation
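For context, the failure is an attribute that exists on Var nodes but not on Call nodes. A TVM-free sketch of the problem and a defensive lookup (the class and field names are illustrative, with checked_type standing in for inferred type information):

```python
# Var carries an explicit type annotation; Call does not, so unconditionally
# reading `type_annotation` fails on nested call inputs (as in the traceback).
class Var:
    def __init__(self, name, type_annotation):
        self.name = name
        self.type_annotation = type_annotation

class Call:
    def __init__(self, op, args, checked_type=None):
        self.op = op
        self.args = args
        self.checked_type = checked_type  # stand-in for an inferred type

def node_dtype(node):
    # Prefer the explicit annotation when present; otherwise fall back to
    # the inferred type instead of raising AttributeError.
    if isinstance(node, Var):
        return node.type_annotation
    return node.checked_type

x = Var("x", "float32")
call = Call("nn.relu", [x], checked_type="float32")
# `call.type_annotation` would raise AttributeError, mirroring the traceback;
# node_dtype works for both node kinds.
```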

@maheshambule (Contributor, Author) replied:

@alexwong, thanks. I have downloaded the model and will try to reproduce the issue in a day or two, and then provide a fix as well.

@maheshambule (Contributor, Author) commented:

@alexwong, while converting the module I hit an issue related to type inference for the ConstantOfShapeZeros op, which I have fixed. After that, I found an issue with the Relay module itself: the arguments provided to the reshape op appear to be incorrect. Below is the error I am getting. Could you please look into it?

test_onnx_model.py:36: in func_to_onnx
    onnx_model = to_onnx(mod, params, name, path=None)
../../../python/tvm/contrib/target/onnx.py:845: in to_onnx
    onnx_model = converter.convert_to_onnx(func)
../../../python/tvm/contrib/target/onnx.py:678: in convert_to_onnx
    self.visit(func)
../../../python/tvm/contrib/target/onnx.py:686: in visit
    super().visit(expr)
../../../python/tvm/relay/expr_functor.py:44: in visit
    res = self.visit_function(expr)
../../../python/tvm/relay/expr_functor.py:153: in visit_function
    self.visit(f.body)
../../../python/tvm/contrib/target/onnx.py:686: in visit
    super().visit(expr)
../../../python/tvm/relay/expr_functor.py:56: in visit
    res = self.visit_tuple(expr)
../../../python/tvm/contrib/target/onnx.py:708: in visit_tuple
    self.visit(f)
../../../python/tvm/contrib/target/onnx.py:686: in visit
    super().visit(expr)
../../../python/tvm/relay/expr_functor.py:46: in visit
    res = self.visit_call(expr)
../../../python/tvm/contrib/target/onnx.py:736: in visit_call
    self.visit(input_arg)
../../../python/tvm/contrib/target/onnx.py:686: in visit
    super().visit(expr)
../../../python/tvm/relay/expr_functor.py:46: in visit
    res = self.visit_call(expr)
../../../python/tvm/contrib/target/onnx.py:736: in visit_call
    self.visit(input_arg)
../../../python/tvm/contrib/target/onnx.py:686: in visit
    super().visit(expr)
../../../python/tvm/relay/expr_functor.py:46: in visit
    res = self.visit_call(expr)
../../../python/tvm/contrib/target/onnx.py:743: in visit_call
    node_entry['types'] = call_node_infer_type(call)
../../../python/tvm/contrib/target/onnx.py:54: in call_node_infer_type
    infer_out = infer_type(node)
../../../python/tvm/contrib/target/onnx.py:46: in infer_type
    mod = tvm.IRModule.from_expr(node)
../../../python/tvm/ir/module.py:222: in from_expr
    return _ffi_api.Module_FromExpr(expr, funcs, defs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <tvm.runtime.packed_func.PackedFunc object at 0x118015780>
args = (CallNode(Op(reshape), [CallNode(Op(concatenate), [Tuple([CallNode(Op(nn.batch_flatten), [CallNode(Op(transpose), [Cal...1 21])], relay.attrs.ReshapeAttrs(0x7fc0c2626908), [TensorType([1, 128772], float32), TensorType([3], int32)]), {}, {})
temp_args = [{}, {}]
values = <tvm._ffi._ctypes.packed_func.TVMValue_Array_3 object at 0x11e962c40>
tcodes = <tvm._ffi._ctypes.packed_func.c_int_Array_3 object at 0x11e962040>

    def __call__(self, *args):
        """Call the function with positional arguments
    
        args : list
           The positional arguments to the function call.
        """
        temp_args = []
        values, tcodes, num_args = _make_tvm_args(args, temp_args)
        ret_val = TVMValue()
        ret_tcode = ctypes.c_int()
        if _LIB.TVMFuncCall(
                self.handle, values, tcodes, ctypes.c_int(num_args),
                ctypes.byref(ret_val), ctypes.byref(ret_tcode)) != 0:
>           raise get_last_ffi_error()
E           tvm._ffi.base.TVMError: Traceback (most recent call last):
E             [bt] (8) 9   libtvm.dylib                        0x00000001189cff09 tvm::relay::TypeInferencer::GetType(tvm::RelayExpr const&) + 297
E             [bt] (7) 8   libtvm.dylib                        0x00000001189d855a tvm::relay::ExprFunctor<tvm::Type (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&) + 138
E             [bt] (6) 7   libtvm.dylib                        0x00000001189de80f tvm::NodeFunctor<tvm::Type (tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::Type (tvm::RelayExpr const&)>*)>::operator()(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::Type (tvm::RelayExpr const&)>*) const + 255
E             [bt] (5) 6   libtvm.dylib                        0x00000001189dfe58 tvm::relay::ExprFunctor<tvm::Type (tvm::RelayExpr const&)>::InitVTable()::'lambda4'(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::Type (tvm::RelayExpr const&)>*)::__invoke(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::Type (tvm::RelayExpr const&)>*) + 24
E             [bt] (4) 5   libtvm.dylib                        0x00000001189d977d tvm::relay::TypeInferencer::VisitExpr_(tvm::relay::CallNode const*) + 717
E             [bt] (3) 4   libtvm.dylib                        0x00000001189e23b2 tvm::relay::TypeInferencer::GeneralCall(tvm::relay::CallNode const*, tvm::runtime::Array<tvm::Type, void>) + 2866
E             [bt] (2) 3   libtvm.dylib                        0x00000001189e00ba tvm::relay::TypeInferencer::ReportFatalError(tvm::runtime::ObjectRef const&, tvm::Error const&) + 154
E             [bt] (1) 2   libtvm.dylib                        0x00000001181a9850 tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool) + 5296
E             [bt] (0) 1   libtvm.dylib                        0x0000000118086781 dmlc::LogMessageFatal::~LogMessageFatal() + 113
E             File "/Users/demo/git/tvm/src/ir/error.cc", line 132
E           TVMError: 
E           Error(s) have occurred. The program has been annotated with them:
E           
E           In `main`: 
E           v0.0.4
E           fn (%cv22_0_i0: Tensor[(1, 3, 512, 512), float32]) {
E             %0 = nn.conv2d(%cv22_0_i0, meta[relay.Constant][0], strides=[2, 2], padding=[1, 1, 1, 1], channels=32, kernel_size=[3, 3]);
E             %1 = nn.batch_norm(%0, meta[relay.Constant][1], meta[relay.Constant][2], meta[relay.Constant][3], meta[relay.Constant][4]);
E             %2 = %1.0;
E             %3 = nn.relu(%2);
E             %4 = nn.conv2d(%3, meta[relay.Constant][5], padding=[1, 1, 1, 1], groups=32, channels=32, kernel_size=[3, 3]);
E             %5 = nn.batch_norm(%4, meta[relay.Constant][6], meta[relay.Constant][7], meta[relay.Constant][8], meta[relay.Constant][9]);
E             %6 = %5.0;
E             %7 = nn.relu(%6);
E             %8 = nn.conv2d(%7, meta[relay.Constant][10], padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]);
E             %9 = nn.batch_norm(%8, meta[relay.Constant][11], meta[relay.Constant][12], meta[relay.Constant][13], meta[relay.Constant][14]);
E             %10 = %9.0;
E             %11 = nn.relu(%10);
E             %12 = nn.conv2d(%11, meta[relay.Constant][15], strides=[2, 2], padding=[1, 1, 1, 1], groups=64, channels=64, kernel_size=[3, 3]);
E             %13 = nn.batch_norm(%12, meta[relay.Constant][16], meta[relay.Constant][17], meta[relay.Constant][18], meta[relay.Constant][19]);
E             %14 = %13.0;
E             %15 = nn.relu(%14);
E             %16 = nn.conv2d(%15, meta[relay.Constant][20], padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]);
E             %17 = nn.batch_norm(%16, meta[relay.Constant][21], meta[relay.Constant][22], meta[relay.Constant][23], meta[relay.Constant][24]);
E             %18 = %17.0;
E             %19 = nn.relu(%18);
E             %20 = nn.conv2d(%19, meta[relay.Constant][25], padding=[1, 1, 1, 1], groups=128, channels=128, kernel_size=[3, 3]);
E             %21 = nn.batch_norm(%20, meta[relay.Constant][26], meta[relay.Constant][27], meta[relay.Constant][28], meta[relay.Constant][29]);
E             %22 = %21.0;
E             %23 = nn.relu(%22);
E             %24 = nn.conv2d(%23, meta[relay.Constant][30], padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]);
E             %25 = nn.batch_norm(%24, meta[relay.Constant][31], meta[relay.Constant][32], meta[relay.Constant][33], meta[relay.Constant][34]);
E             %26 = %25.0;
E             %27 = nn.relu(%26);
E             %28 = nn.conv2d(%27, meta[relay.Constant][35], strides=[2, 2], padding=[1, 1, 1, 1], groups=128, channels=128, kernel_size=[3, 3]);
E             %29 = nn.batch_norm(%28, meta[relay.Constant][36], meta[relay.Constant][37], meta[relay.Constant][38], meta[relay.Constant][39]);
E             %30 = %29.0;
E             %31 = nn.relu(%30);
E             %32 = nn.conv2d(%31, meta[relay.Constant][40], padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]);
E             %33 = nn.batch_norm(%32, meta[relay.Constant][41], meta[relay.Constant][42], meta[relay.Constant][43], meta[relay.Constant][44]);
E             %34 = %33.0;
E             %35 = nn.relu(%34);
E             %36 = nn.conv2d(%35, meta[relay.Constant][45], padding=[1, 1, 1, 1], groups=256, channels=256, kernel_size=[3, 3]);
E             %37 = nn.batch_norm(%36, meta[relay.Constant][46], meta[relay.Constant][47], meta[relay.Constant][48], meta[relay.Constant][49]);
E             %38 = %37.0;
E             %39 = nn.relu(%38);
E             %40 = nn.conv2d(%39, meta[relay.Constant][50], padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]);
E             %41 = nn.batch_norm(%40, meta[relay.Constant][51], meta[relay.Constant][52], meta[relay.Constant][53], meta[relay.Constant][54]);
E             %42 = %41.0;
E             %43 = nn.relu(%42);
E             %44 = nn.conv2d(%43, meta[relay.Constant][55], strides=[2, 2], padding=[1, 1, 1, 1], groups=256, channels=256, kernel_size=[3, 3]);
E             %45 = nn.batch_norm(%44, meta[relay.Constant][56], meta[relay.Constant][57], meta[relay.Constant][58], meta[relay.Constant][59]);
E             %46 = %45.0;
E             %47 = nn.relu(%46);
E             %48 = nn.conv2d(%47, meta[relay.Constant][60], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]);
E             %49 = nn.batch_norm(%48, meta[relay.Constant][61], meta[relay.Constant][62], meta[relay.Constant][63], meta[relay.Constant][64]);
E             %50 = %49.0;
E             %51 = nn.relu(%50);
E             %52 = nn.conv2d(%51, meta[relay.Constant][65], padding=[1, 1, 1, 1], groups=512, channels=512, kernel_size=[3, 3]);
E             %53 = nn.batch_norm(%52, meta[relay.Constant][66], meta[relay.Constant][67], meta[relay.Constant][68], meta[relay.Constant][69]);
E             %54 = %53.0;
E             %55 = nn.relu(%54);
E             %56 = nn.conv2d(%55, meta[relay.Constant][70], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]);
E             %57 = nn.batch_norm(%56, meta[relay.Constant][71], meta[relay.Constant][72], meta[relay.Constant][73], meta[relay.Constant][74]);
E             %58 = %57.0;
E             %59 = nn.relu(%58);
E             %60 = nn.conv2d(%59, meta[relay.Constant][75], padding=[1, 1, 1, 1], groups=512, channels=512, kernel_size=[3, 3]);
E             %61 = nn.batch_norm(%60, meta[relay.Constant][76], meta[relay.Constant][77], meta[relay.Constant][78], meta[relay.Constant][79]);
E             %62 = %61.0;
E             %63 = nn.relu(%62);
E             %64 = nn.conv2d(%63, meta[relay.Constant][80], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]);
E             %65 = nn.batch_norm(%64, meta[relay.Constant][81], meta[relay.Constant][82], meta[relay.Constant][83], meta[relay.Constant][84]);
E             %66 = %65.0;
E             %67 = nn.relu(%66);
E             %68 = nn.conv2d(%67, meta[relay.Constant][85], padding=[1, 1, 1, 1], groups=512, channels=512, kernel_size=[3, 3]);
E             %69 = nn.batch_norm(%68, meta[relay.Constant][86], meta[relay.Constant][87], meta[relay.Constant][88], meta[relay.Constant][89]);
E             %70 = %69.0;
E             %71 = nn.relu(%70);
E             %72 = nn.conv2d(%71, meta[relay.Constant][90], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]);
E             %73 = nn.batch_norm(%72, meta[relay.Constant][91], meta[relay.Constant][92], meta[relay.Constant][93], meta[relay.Constant][94]);
E             %74 = %73.0;
E             %75 = nn.relu(%74);
E             %76 = nn.conv2d(%75, meta[relay.Constant][95], padding=[1, 1, 1, 1], groups=512, channels=512, kernel_size=[3, 3]);
E             %77 = nn.batch_norm(%76, meta[relay.Constant][96], meta[relay.Constant][97], meta[relay.Constant][98], meta[relay.Constant][99]);
E             %78 = %77.0;
E             %79 = nn.relu(%78);
E             %80 = nn.conv2d(%79, meta[relay.Constant][100], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]);
E             %81 = nn.batch_norm(%80, meta[relay.Constant][101], meta[relay.Constant][102], meta[relay.Constant][103], meta[relay.Constant][104]);
E             %82 = %81.0;
E             %83 = nn.relu(%82);
E             %84 = nn.conv2d(%83, meta[relay.Constant][105], padding=[1, 1, 1, 1], groups=512, channels=512, kernel_size=[3, 3]);
E             %85 = nn.batch_norm(%84, meta[relay.Constant][106], meta[relay.Constant][107], meta[relay.Constant][108], meta[relay.Constant][109]);
E             %86 = %85.0;
E             %87 = nn.relu(%86);
E             %88 = nn.conv2d(%87, meta[relay.Constant][110], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]);
E             %89 = nn.batch_norm(%88, meta[relay.Constant][111], meta[relay.Constant][112], meta[relay.Constant][113], meta[relay.Constant][114]);
E             %90 = %89.0;
E             %91 = nn.relu(%90);
E             %92 = nn.conv2d(%91, meta[relay.Constant][115], padding=[1, 1, 1, 1], channels=84, kernel_size=[3, 3]);
E             %93 = nn.bias_add(%92, meta[relay.Constant][116]);
E             %94 = transpose(%93, axes=[0, 2, 3, 1]);
E             %95 = nn.batch_flatten(%94);
E             %96 = nn.conv2d(%91, meta[relay.Constant][117], strides=[2, 2], padding=[1, 1, 1, 1], groups=512, channels=512, kernel_size=[3, 3]);
E             %97 = nn.batch_norm(%96, meta[relay.Constant][118], meta[relay.Constant][119], meta[relay.Constant][120], meta[relay.Constant][121]);
E             %98 = %97.0;
E             %99 = nn.relu(%98);
E             %100 = nn.conv2d(%99, meta[relay.Constant][122], padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]);
E             %101 = nn.batch_norm(%100, meta[relay.Constant][123], meta[relay.Constant][124], meta[relay.Constant][125], meta[relay.Constant][126]);
E             %102 = %101.0;
E             %103 = nn.relu(%102);
E             %104 = nn.conv2d(%103, meta[relay.Constant][127], padding=[1, 1, 1, 1], groups=1024, channels=1024, kernel_size=[3, 3]);
E             %105 = nn.batch_norm(%104, meta[relay.Constant][128], meta[relay.Constant][129], meta[relay.Constant][130], meta[relay.Constant][131]);
E             %106 = %105.0;
E             %107 = nn.relu(%106);
E             %108 = nn.conv2d(%107, meta[relay.Constant][132], padding=[0, 0, 0, 0], channels=1024, kernel_size=[1, 1]);
E             %109 = nn.batch_norm(%108, meta[relay.Constant][133], meta[relay.Constant][134], meta[relay.Constant][135], meta[relay.Constant][136]);
E             %110 = %109.0;
E             %111 = nn.relu(%110);
E             %112 = nn.conv2d(%111, meta[relay.Constant][137], padding=[1, 1, 1, 1], channels=126, kernel_size=[3, 3]);
E             %113 = nn.bias_add(%112, meta[relay.Constant][138]);
E             %114 = transpose(%113, axes=[0, 2, 3, 1]);
E             %115 = nn.batch_flatten(%114);
E             %116 = nn.conv2d(%111, meta[relay.Constant][139], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]);
E             %117 = nn.batch_norm(%116, meta[relay.Constant][140], meta[relay.Constant][141], meta[relay.Constant][142], meta[relay.Constant][143], epsilon=0.001f, scale=False);
E             %118 = %117.0;
E             %119 = nn.relu(%118);
E             %120 = nn.conv2d(%119, meta[relay.Constant][144], strides=[2, 2], padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]);
E             %121 = nn.batch_norm(%120, meta[relay.Constant][145], meta[relay.Constant][146], meta[relay.Constant][147], meta[relay.Constant][148], epsilon=0.001f, scale=False);
E             %122 = %121.0;
E             %123 = nn.relu(%122);
E             %124 = nn.conv2d(%123, meta[relay.Constant][149], padding=[1, 1, 1, 1], channels=126, kernel_size=[3, 3]);
E             %125 = nn.bias_add(%124, meta[relay.Constant][150]);
E             %126 = transpose(%125, axes=[0, 2, 3, 1]);
E             %127 = nn.batch_flatten(%126);
E             %128 = nn.conv2d(%123, meta[relay.Constant][151], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]);
E             %129 = nn.batch_norm(%128, meta[relay.Constant][152], meta[relay.Constant][153], meta[relay.Constant][154], meta[relay.Constant][155], epsilon=0.001f, scale=False);
E             %130 = %129.0;
E             %131 = nn.relu(%130);
E             %132 = nn.conv2d(%131, meta[relay.Constant][156], strides=[2, 2], padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]);
E             %133 = nn.batch_norm(%132, meta[relay.Constant][157], meta[relay.Constant][158], meta[relay.Constant][159], meta[relay.Constant][160], epsilon=0.001f, scale=False);
E             %134 = %133.0;
E             %135 = nn.relu(%134);
E             %136 = nn.conv2d(%135, meta[relay.Constant][161], padding=[1, 1, 1, 1], channels=126, kernel_size=[3, 3]);
E             %137 = nn.bias_add(%136, meta[relay.Constant][162]);
E             %138 = transpose(%137, axes=[0, 2, 3, 1]);
E             %139 = nn.batch_flatten(%138);
E             %140 = nn.conv2d(%135, meta[relay.Constant][163], padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]);
E             %141 = nn.batch_norm(%140, meta[relay.Constant][164], meta[relay.Constant][165], meta[relay.Constant][166], meta[relay.Constant][167], epsilon=0.001f, scale=False);
E             %142 = %141.0;
E             %143 = nn.relu(%142);
E             %144 = nn.conv2d(%143, meta[relay.Constant][168], strides=[2, 2], padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]);
E             %145 = nn.batch_norm(%144, meta[relay.Constant][169], meta[relay.Constant][170], meta[relay.Constant][171], meta[relay.Constant][172], epsilon=0.001f, scale=False);
E             %146 = %145.0;
E             %147 = nn.relu(%146);
E             %148 = nn.conv2d(%147, meta[relay.Constant][173], padding=[1, 1, 1, 1], channels=84, kernel_size=[3, 3]);
E             %149 = nn.bias_add(%148, meta[relay.Constant][174]);
E             %150 = transpose(%149, axes=[0, 2, 3, 1]);
E             %151 = nn.batch_flatten(%150);
E             %152 = nn.conv2d(%147, meta[relay.Constant][175], padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]);
E             %153 = nn.batch_norm(%152, meta[relay.Constant][176], meta[relay.Constant][177], meta[relay.Constant][178], meta[relay.Constant][179], epsilon=0.001f, scale=False);
E             %154 = %153.0;
E             %155 = nn.relu(%154);
E             %156 = nn.conv2d(%155, meta[relay.Constant][180], strides=[2, 2], padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]);
E             %157 = nn.batch_norm(%156, meta[relay.Constant][181], meta[relay.Constant][182], meta[relay.Constant][183], meta[relay.Constant][184], epsilon=0.001f, scale=False);
E             %158 = %157.0;
E             %159 = nn.relu(%158);
E             %160 = nn.conv2d(%159, meta[relay.Constant][185], padding=[1, 1, 1, 1], channels=84, kernel_size=[3, 3]);
E             %161 = nn.bias_add(%160, meta[relay.Constant][186]);
E             %162 = transpose(%161, axes=[0, 2, 3, 1]);
E             %163 = nn.batch_flatten(%162);
E             %164 = (%95, %115, %127, %139, %151, %163);
E             %165 = concatenate(%164, axis=1);
E             reshape(%165, meta[relay.Constant][187], newshape=[0, -1, 21]) the function is provided too many arguments expected 1, found 2; 
E           }
E           // meta data omitted. you can use show_meta_data=True to include meta data

@kazum (Contributor) commented Jun 29, 2020

@maheshambule Can you rebase this PR onto the latest master? I think you need to update your code to address the changes in #5770.

@kazum kazum added status: need update need update based on feedbacks and removed status: need review labels Jul 1, 2020
@maheshambule (Contributor, Author) commented:

@kazum, I have merged the latest master and made the relevant updates to address the changes in #5770.

@tqchen (Member) commented Jul 14, 2020

@srkreddy1238, please also update your review. @kazum, feel free to make the call to dismiss outstanding reviews.

@kazum (Contributor) commented Jul 14, 2020

> @kazum, I have merged latest master and made relevant updates to address changes in #5770.

Thanks, looks good. I'll merge this after @srkreddy1238 updates the review status.

@kazum kazum removed the status: need update need update based on feedbacks label Jul 14, 2020
@kazum kazum dismissed srkreddy1238’s stale review July 15, 2020 20:22

The requested changes are addressed and there are already approving reviews.

@kazum kazum merged commit 5c73efe into apache:master Jul 15, 2020
@maheshambule (Contributor, Author) commented:

Thanks @kazum ☺️

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Aug 26, 2020
* Relay to ONNX converter

* Relay to ONNX op test cases

* Relay to ONNX end to end model test cases

* Add test cases to jenkins

* CI CD fixes

* ONNX codegen

* ONNX codegen

* ONNX codegen

* onnx testcases

* ONNX codegen

* test onnx

* ONNX codegen

* shape calculation

* move onnx codegen to contrib/target

* review comments

* ONNX target use visitor

* onnx fixes

* lint fixes

* doc string changes

* review comments

* review comment fixes

* review comment

* pytest skip

* rename type to node type

* test

* Fix for constantshpae, add exp, fix for metadatamodule

* Fix cpplint

* change error tol values
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Aug 26, 2020 (same commit list as above)
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Sep 2, 2020 (same commit list as above)
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Sep 3, 2020 (same commit list as above)