
[RELAY][BYOC] Add support for composite functions in BYOC #5261

Merged · masahi merged 3 commits into apache:master on Apr 8, 2020

Conversation

@mbaret (Contributor) commented Apr 7, 2020

This PR includes two changes to support composite functions as part of the BYOC flow. First, 'check' functions have been added to the MergeComposite pass. Currently, you can only do structural pattern matches, which don't test any attributes or tensor sizes. The check function accepts the matched expression and can implement arbitrary logic to decide whether that match should be supported. See the new test for an example which queries the convolution layout.
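A minimal sketch of how a pattern table with a 'check' function might look, based on the description above and the new test; the tuple-based (name, pattern, check) form and the exact pattern here are assumptions, not verbatim from this PR:

import tvm
from tvm import relay

def make_conv_bias_relu_pattern():
    # Pattern: conv2d -> bias_add -> relu. Matching is structural only,
    # so attributes like data_layout are not tested by the pattern itself.
    x = relay.var("x")
    w = relay.var("w")
    b = relay.var("b")
    return relay.nn.relu(relay.nn.bias_add(relay.nn.conv2d(x, w), b))

def check_nhwc(extract):
    # 'extract' is the matched expression (the relu at the root);
    # walk down to the conv2d and only accept NHWC convolutions.
    conv = extract.args[0].args[0]
    return conv.attrs.data_layout == "NHWC"

pattern_table = [
    ("dnnl.conv_bias_relu", make_conv_bias_relu_pattern(), check_nhwc),
]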

The other change enables the annotation of composite functions in the AnnotateTarget pass. For a composite function to be recognised, you must name it according to the convention:

{compiler}.{composite_name}
e.g. dnnl.add_relu
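Continuing the sketch above (shapes and layouts are illustrative): the "dnnl." prefix on the composite name is what lets AnnotateTarget("dnnl") claim the matched function for the dnnl codegen.

data = relay.var("data", shape=(1, 56, 56, 64))
weight = relay.var("weight", shape=(3, 3, 64, 64))
bias = relay.var("bias", shape=(64,))
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3),
                       data_layout="NHWC", kernel_layout="HWIO")
out = relay.nn.relu(relay.nn.bias_add(conv, bias, axis=3))

mod = tvm.IRModule.from_expr(out)
mod = relay.transform.MergeComposite(pattern_table)(mod)
mod = relay.transform.AnnotateTarget("dnnl")(mod)
mod = relay.transform.PartitionGraph()(mod)

After PartitionGraph, the composite call ends up inside an external @dnnl_* function, as shown in the IR dumps later in this thread.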

Commits (3):

* [RELAY] Add 'check' functions to MergeComposite

  Currently, MergeComposite can only perform structural matches. This patch introduces the ability to specify a 'check' function alongside the pattern, which can include custom logic to determine whether an extracted pattern should be merged.

  For example, if you only want to merge 'NHWC' convolutions, you can specify a 'check' function which queries the data_layout value of the extracted pattern (see the test).

  Change-Id: I9337ce39f10997051a286d888be38ed0d410d340

* [RELAY] Reformat merge_composite.cc

  Run clang-format on merge_composite.cc.

  Change-Id: I1736bff798cc6d93e57519b08ab3362869098779

* [RELAY][BYOC] Support composite functions in AnnotateTarget

  This patch introduces support for annotating composite functions in the AnnotateTarget pass. For a composite function to be annotated, you should name it according to the style:

  {codegen}.{name}
  e.g. dnnl.add_relu

  Change-Id: I74d6c0b506153d866f6d1feb203b32dad59f2871
@mbaret (Contributor, Author) commented Apr 7, 2020

cc @masahi @zhiics @trevor-m

@masahi self-assigned this Apr 7, 2020
@masahi (Member) commented Apr 7, 2020

cc @soiferj

@alexbooth (Contributor)

Partitioning looks good to me with a simple pattern, conv2d_bias_relu. However, I'm seeing the CHECK fail in IsOp for composite functions. Maybe that's for another PR.

  bool IsOp(const CallNode* call, const std::string& op_name) const {
    const auto* op_node = call->op.as<OpNode>();
    // Fails for composite functions: call->op is a FunctionNode there,
    // not an OpNode (see the discussion below).
    CHECK(op_node) << "Expects a single op.";
    Op op = GetRef<Op>(op_node);
    return op == Op::Get(op_name);
  }

https://github.com/mbaret/tvm/blob/a197bca20a8effc48d06388190aa1b5f3b525ef6/src/relay/backend/contrib/codegen_c/codegen_c.h#L191-L205

TVMError: Check failed: op_node: Expects a single op.

@masahi (Member) commented Apr 7, 2020

Using MergeComposite, AnnotateTarget and PartitionGraph, I get the following graph for conv + bias + relu pattern:

def @dnnl_0(%dnnl_0_i0: Tensor[(1, 3, 224, 224), float32], Inline=1, Compiler="dnnl", global_symbol=runtime.String(0x55a8d9cddbd0), Primitive=1) -> Tensor[(1, 1, 224, 224), float32] {
  %2 = fn (%data: Tensor[(1, 3, 224, 224), float32], %weight: Tensor[(1, 3, 3, 3), float32], %bias: Tensor[(1, 1, 1), float32], Composite="dnnl.conv_bias_relu") -> Tensor[(1, 1, 224, 224), float32] {
    %0 = nn.conv2d(%data, %weight, padding=[1, 1, 1, 1], channels=1, kernel_size=[3, 3]) /* ty=Tensor[(1, 1, 224, 224), float32] */;
    %1 = add(%0, %bias) /* ty=Tensor[(1, 1, 224, 224), float32] */;
    nn.relu(%1) /* ty=Tensor[(1, 1, 224, 224), float32] */
  };
  %2(%dnnl_0_i0, meta[relay.Constant][0] /* ty=Tensor[(1, 3, 3, 3), float32] */ /* ty=Tensor[(1, 3, 3, 3), float32] */, meta[relay.Constant][1] /* ty=Tensor[(1, 1, 1), float32] */ /* ty=Tensor[(1, 1, 1), float32] */) /* ty=Tensor[(1, 1, 224, 224), float32] */
}

def @main(%data1: Tensor[(1, 3, 224, 224), float32]) -> Tensor[(1, 1, 224, 224), float32] {
  @dnnl_0(%data1) /* ty=Tensor[(1, 1, 224, 224), float32] */
}

Is it possible to inline the composite function %2 there into @dnnl_0? What I want is this:

def @dnnl_0(%dnnl_0_i0: Tensor[(1, 3, 224, 224), float32], Inline=1, Compiler="dnnl", global_symbol=runtime.String(0x5599b307c370), Primitive=1) -> Tensor[(1, 1, 224, 224), float32] {
  %0 = nn.conv2d(%dnnl_0_i0, meta[relay.Constant][0] /* ty=Tensor[(1, 3, 3, 3), float32] */ /* ty=Tensor[(1, 3, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=1, kernel_size=[3, 3]) /* ty=Tensor[(1, 1, 224, 224), float32] */;
  %1 = add(%0, meta[relay.Constant][1] /* ty=Tensor[(1, 1, 1), float32] */ /* ty=Tensor[(1, 1, 1), float32] */) /* ty=Tensor[(1, 1, 224, 224), float32] */;
  nn.relu(%1) /* ty=Tensor[(1, 1, 224, 224), float32] */
}

def @main(%data: Tensor[(1, 3, 224, 224), float32]) -> Tensor[(1, 1, 224, 224), float32] {
  @dnnl_0(%data) /* ty=Tensor[(1, 1, 224, 224), float32] */
}

Otherwise I have to support function calls in the DNNL codegen. @zhiics @mbaret

UPDATE: hmm, if I inline the composite function, I lose the Composite attribute and hence cannot detect the fused call. Is supporting function calls in the DNNL codegen a better option?

@trevor-m (Contributor) commented Apr 7, 2020

> (quoting @masahi's comment above in full)

Hi @masahi, I think it is best to leave this to the codegen. That way we have the option to handle a composite all at once at the call node, or individually by just doing VisitExpr(func->body) when we encounter one.
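A rough Python sketch of the two options @trevor-m describes, assuming a visitor-based codegen (the real DNNL codegen is C++); the class and the emit_fused_conv_bias_relu handler are hypothetical:

from tvm import relay
from tvm.relay.expr_functor import ExprVisitor

class ExampleCodegen(ExprVisitor):
    def visit_call(self, call):
        if isinstance(call.op, relay.Function):
            # Composite callee: the pattern name is stored in the
            # "Composite" function attribute, e.g. "dnnl.conv_bias_relu".
            name = call.op.attrs["Composite"]
            if name == "dnnl.conv_bias_relu":
                # Option 1: handle the whole match at the call node.
                self.emit_fused_conv_bias_relu(call)
            else:
                # Option 2: descend into the body and handle ops one by one.
                self.visit(call.op.body)
            for arg in call.args:
                self.visit(arg)
        else:
            super().visit_call(call)

    def emit_fused_conv_bias_relu(self, call):
        # Hypothetical: emit a single fused library call for the match.
        pass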

@masahi (Member) commented Apr 7, 2020

@trevor-m Yeah, agreed. I'm looking to support either function calls or "manual inlining" in the codegen.

@masahi (Member) commented Apr 8, 2020

Ok, got the DNNL conv + bias + relu working (masahi/tvm@byoc-composite...masahi:dnnl-composite). This is a reimplementation of #4741 based on the composite annotation support in this PR, rather than the manual approach. I think it is much cleaner.

I can send a new PR as soon as this PR is merged.

@zhiics (Member) commented Apr 8, 2020

@masahi @trevor-m can you guys help review as well?

@zhiics (Member) commented Apr 8, 2020

@alexbooth Thanks for pointing that out. The purpose of codegen_c is mainly quick prototyping. Would you be interested in sending a PR to add support for composite functions after this is merged?

@zhiics (Member) commented Apr 8, 2020

@masahi Yeah, inlining the function you pointed out is a little tricky because it varies case by case. I intentionally made the pass a bit more conservative and left it for the external codegen to handle. In the long run, it would be more helpful to make the passes more configurable.

@masahi (Member) commented Apr 8, 2020

@zhiics Yes, I'll review and merge this today.
@alexbooth The error you get is because, with composite functions, the op inside a CallNode can now be a FunctionNode rather than an OpNode. I'll send a DNNL change today so you can take a look.

@masahi merged commit d2de35e into apache:master on Apr 8, 2020
@masahi (Member) commented Apr 8, 2020

Thanks @mbaret @zhiics

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request on Apr 16, 2020
zhiics pushed a commit to neo-ai/tvm that referenced this pull request on Apr 17, 2020
dpankratz pushed a commit to dpankratz/incubator-tvm that referenced this pull request on Apr 24, 2020