
[TOPI] Tunable Template for Conv2D HWCN on CUDA #4168

Merged: 3 commits into apache:master from comaniac:cuda_conv2d_hwcn_tune on Oct 24, 2019

Conversation

@comaniac (Contributor) commented on Oct 21, 2019:

This PR has done the following:

  • Enable HWCN layout for conv2d in Relay.

Now we can assign the HWCN layout when creating a conv2d op using Relay:

```python
out = relay.nn.conv2d(data, weight, ..., data_layout='HWCN', kernel_layout='HWIO')
```
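For a self-contained illustration, the sketch below builds such an op end to end; the shapes, strides, and padding are hypothetical choices borrowed from the first workload in the table below, not something this PR prescribes:

```python
from tvm import relay

# Hypothetical shapes matching the first workload in the table below.
# HWCN data is (height, width, channels, batch); HWIO kernel is
# (height, width, input_channels, output_channels).
data = relay.var("data", shape=(224, 224, 3, 1), dtype="float32")
weight = relay.var("weight", shape=(3, 3, 3, 32), dtype="float32")

out = relay.nn.conv2d(data, weight,
                      strides=(2, 2), padding=(1, 1),
                      channels=32, kernel_size=(3, 3),
                      data_layout="HWCN", kernel_layout="HWIO")
func = relay.Function([data, weight], out)
```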

AutoTVM can also extract the conv2d tasks with the HWCN layout; a sketch follows below. It can be integrated into the layout conversion pass in the future.
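Continuing the snippet above, task extraction might look like the following sketch. The API shown is the one from TVM releases of this era (around 0.6); later versions changed `extract_from_program` to take a module and op names, so treat the exact signature as an assumption:

```python
from tvm import autotvm, relay

mod = relay.Module.from_expr(func)  # `func` from the snippet above
tasks = autotvm.task.extract_from_program(mod["main"],
                                          target="cuda",
                                          params={},
                                          ops=(relay.op.nn.conv2d,))
# Each extracted task should carry the HWCN data layout in its args.
```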

  • Make the TOPI schedule of conv2d HWCN on CUDA tunable.

The tunable schedule is based on the original one, and the original config is now the fallback, so performance is unchanged for users who do not use AutoTVM. Below are the performance results on an AWS p3 instance for several conv2d workloads at different batch sizes. Scalability is defined as (Throughput A / Throughput B) / (Batch Size A / Batch Size B), so a value of 1 means throughput scales perfectly with batch size: for example, quadrupling the batch size while quadrupling the throughput gives a scalability of 1. Note that when the config space has fewer than 2,000 candidates, tuning stops after exploring the whole space (see the tuning sketch after the table). Overall, the tuned schedule achieves a 3.5x average speedup and 0.98 scalability.

| N | C | H | W | O | KH | KW | Stride | Padding | Space | Default (GFLOP/s) | Scalability | After 2,000 Trials (GFLOP/s) | Scalability | Speedup |
|---|---|---|---|---|----|----|--------|---------|-------|-------------------|-------------|------------------------------|-------------|---------|
| 1 | 3 | 224 | 224 | 32 | 3 | 3 | (2, 2) | (1, 1) | 52 | 252.57 | 1.00 | 366.1 | 1.00 | 1.45 |
| 4 | 3 | 224 | 224 | 32 | 3 | 3 | (2, 2) | (1, 1) | 520 | 879.11 | 0.87 | 2113.74 | 1.66 | 2.40 |
| 8 | 3 | 224 | 224 | 32 | 3 | 3 | (2, 2) | (1, 1) | 1768 | 2773.76 | 1.58 | 4803.27 | 0.72 | 1.73 |
| 16 | 3 | 224 | 224 | 32 | 3 | 3 | (2, 2) | (1, 1) | 2704 | 4127.37 | 0.74 | 6922.33 | 0.97 | 1.68 |
| 32 | 3 | 224 | 224 | 32 | 3 | 3 | (2, 2) | (1, 1) | 3796 | 4129.1 | 0.50 | 7222.31 | 1.04 | 1.75 |
| 1 | 192 | 14 | 14 | 64 | 1 | 1 | (1, 1) | (0, 0) | 73 | 130.99 | 1.00 | 342.58 | 1.00 | 2.62 |
| 4 | 192 | 14 | 14 | 64 | 1 | 1 | (1, 1) | (0, 0) | 730 | 517.31 | 0.99 | 1902.55 | 1.41 | 3.68 |
| 8 | 192 | 14 | 14 | 64 | 1 | 1 | (1, 1) | (0, 0) | 2482 | 1813.46 | 1.75 | 5310.76 | 0.80 | 2.93 |
| 16 | 192 | 14 | 14 | 64 | 1 | 1 | (1, 1) | (0, 0) | 3796 | 4942.23 | 1.36 | 6574.4 | 0.45 | 1.33 |
| 32 | 192 | 14 | 14 | 64 | 1 | 1 | (1, 1) | (0, 0) | 5329 | 5625.78 | 0.57 | 8945.74 | 1.20 | 1.59 |
| 1 | 1280 | 1 | 1 | 1000 | 1 | 1 | (1, 1) | (0, 0) | 80 | 10.48 | 1.00 | 88.11 | 1.00 | 8.41 |
| 4 | 1280 | 1 | 1 | 1000 | 1 | 1 | (1, 1) | (0, 0) | 800 | 40.82 | 0.97 | 332.95 | 0.97 | 8.16 |
| 8 | 1280 | 1 | 1 | 1000 | 1 | 1 | (1, 1) | (0, 0) | 2720 | 158.78 | 1.94 | 898.16 | 0.69 | 5.66 |
| 16 | 1280 | 1 | 1 | 1000 | 1 | 1 | (1, 1) | (0, 0) | 4160 | 397.55 | 1.25 | 1719.73 | 0.76 | 4.33 |
| 32 | 1280 | 1 | 1 | 1000 | 1 | 1 | (1, 1) | (0, 0) | 5840 | 570.4 | 0.72 | 2707.75 | 1.10 | 4.75 |
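For completeness, a tuning loop matching the setup used for these numbers (up to 2,000 trials, capped at the size of the config space) might look like this sketch; the measurement settings and log file name are illustrative assumptions, not taken from the PR:

```python
from tvm import autotvm
from tvm.autotvm.tuner import XGBTuner

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, repeat=3, timeout=4))

for task in tasks:  # `tasks` as extracted above
    tuner = XGBTuner(task, loss_type="rank")
    # Explore the whole space when it has fewer than 2,000 candidates.
    n_trial = min(2000, len(task.config_space))
    tuner.tune(n_trial=n_trial,
               measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file("conv2d_hwcn.log")])
```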

@comaniac (Contributor, Author) commented on Oct 21, 2019:

@Laurawly @kevinthesun @icemelon9 please help review this PR. Thanks.

@Laurawly self-assigned this on Oct 21, 2019

@Laurawly (Contributor) left a review:

Overall LGTM, some minor comments.

```diff
@@ -368,7 +368,7 @@ class Vectorizer : public IRMutator {
   CHECK(!op->extent.type().is_vector());
   Expr extent = Mutate(op->extent);
   if (extent.type().is_vector()) {
-    LOG(WARNING) << "Detect vectorized extent type, scalarizing...";
+    // LOG(WARNING) << "Detect vectorized extent type, scalarizing...";
```
A reviewer (Contributor) commented on this diff:

Remove comment

Another reviewer (Contributor) commented:

@merrymercy Do we need this warning?

@comaniac (Contributor, Author) replied:

The reason for removing this warning is that AutoTVM may trigger it for some candidate configs during tuning, which gets noisy. If we really need to keep the warning, one alternative is to hide all warnings in AutoTVM, although I am not sure that is doable, since they are all managed by the same logging system.
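As a sketch of that alternative (with the caveat above), silencing AutoTVM's Python-side logger would be a one-liner, though it would not reach messages emitted by C++:

```python
import logging

# Suppress warnings routed through AutoTVM's Python logger. The warning in
# question comes from the C++ LOG(WARNING) macro, which writes to stderr
# directly, so this alone would not hide it; hence the doubt expressed above.
logging.getLogger("autotvm").setLevel(logging.ERROR)
```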

Review thread on topi/python/topi/cuda/conv2d_hwcn.py (outdated, resolved)

@comaniac force-pushed the cuda_conv2d_hwcn_tune branch from 601c9a7 to 1d6a62d on October 22, 2019, 18:00

@Laurawly merged commit 4ab7363 into apache:master on Oct 24, 2019

@comaniac deleted the cuda_conv2d_hwcn_tune branch on October 24, 2019, 22:34
kevinthesun pushed a commit to kevinthesun/tvm that referenced this pull request Oct 30, 2019
* support conv2d HWCN in AutoTVM and Relay

* fix lint

* fix comments and unit tests
kevinthesun added a commit to neo-ai/tvm that referenced this pull request Oct 31, 2019
* [relay][vm] Separate VM runtime with executable (apache#4100)

* [relay][vm] Separate VM runtime with executable

* Address comments

* move ctx back to vm

* make only vm related fields and methods protected

* integrate serialization/deserialization into executable

* create stream

* [Relay][Frontend][TF] Add tensor array ops (apache#3798)

* [Relay][Frontend][TF] Add tensor array ops

* rename

* delete test

* Move utility function

* Refactor

* fix tensor array ops

* fix test

* fix rebase

* Fix serializer bug

* Improve tf convert name lookup to use prelude api

* Fix lint

* Fix test

* Fix typo (apache#4144)

* [CI] Pin NNPack pthreadtools version (apache#4152)

* [QNN][TFLite] Parsing QNN Add op. Adding MobilenetV2. (apache#4142)

* Add lift_if_then_else pass (apache#3865)

* Add LiftIfThenElse pass

* Add more comments

* Rename and refactor

* Add description for internal data structure

* Rename a test

* Minor change

* Address comments

* Improve update_for

* [CI] Update cpu docker (apache#4153)

* [Refactor] Rename Datatype to ADT (apache#4156)

We think it will reduce confusion about the meaning.

https://discuss.tvm.ai/t/discuss-consider-rename-vm-datatype/4339

* [Runtime] Enable option to use OpenMP thread pool (apache#4089)

* [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol. (apache#4161)

* [REFACTOR][NODE][RUNTIME] Move Node to the new Object protocol.

This PR removes the original node system and makes Node a subclass of Object.
This is a major refactor towards a better unified runtime object system.

List of changes in the refactor:

- We now hide data_ field, use Downcast explicitly to get a sub-class object.
- Removed the node system FFI in python.
- Removed the node C API, instead use PackedFunc for list and get attrs.
- Change relay::Op::set_attr_type_key(attr_key_name) to relay::Op::set_attr_type<AttrType>().
  - This change was necessary because of the new Object registration mechanism.
  - Subsequent changes to the op registrations
  - The change revealed a few previous problems that are now fixed.
- Patched up a few missing node type registrations.
  - Now we will raise an error if we encounter an object type that is not registered.
- The original node.h and container.h are kept in the same location.
- Calling convention: kObjectHandle now equals the old kNodeHandle, kNodeHandle is removed.
- IRFunctor now dispatches on ObjectRef.
- Update to the new type checking API: is_type, derived_from are replaced by IsInstance.
- Removed .hash member function, instead use C++ convention hasher functors.

* Address review comments

* [CI] Move golang tests to the end (apache#4164)

* Add support for quantized multiply to Relay (apache#4141)

This patch adds multiply operator for quantized tensors.
The details of the quantized multiplication are outlined
in the code.

This builds on pull request 3927 and includes the changes
Animesh mentions in the comments on that request.

Change-Id: I555715b53d0266a91d5c03dc3dfe8fc31e7ce4e1

* Fix misspelling (apache#4166)

FIX "After connecting he usb" with "After connecting the usb"

* [Relay][Pass] Count MAC for BatchMatMul (apache#4157)

* count MAC for BatchMatMul

* update doc

* [Relay][QNN] Add unit test for int8 (apache#4159)

* [bugfix][codegen] fix casting bug in llvm codegen

* update example

* retrigger ci

* check llvm version

* [relay][vm] Reuse allocated device memory (apache#4170)

* add missing gradient check to gradient pass (apache#4169)

* merge extract_from_program and extract_from_multiple_program (apache#4173)

* [TOPI] Added support for Mali Bifrost target (apache#4047)

* [Relay][Frontend][TF] Fix Size operator (apache#4175)

* [Relay][Frontend][TF] Fix Size operator

* Uncomment tests

* [Pass] Remove dead code (apache#4177)

* [rpc] use callback func to do send & recv (apache#4147)

* [rpc] use callback func to do send & recv. don't get fd from sock as it is deprecated in java

* fix java build

* fix min/max macro define in windows

* keep the old rpc setup for py

* add doc for CallbackChannel

* Add support and testing for tf.assert (as no-op) and tf.no_op to TF Relay frontend. (apache#4172)

* [DOCS] Add TensorFlow frontend docs (apache#4154)

* Start to update TF frontend docs

* Add rst

* Remove markdown

* Update wording

* Resolve comments

* Revert "[Relay][QNN] Add unit test for int8 (apache#4159)" (apache#4192)

This reverts commit 6f9d028.

* [cmake][ANTLR] Support setting path to ANTLR jar (apache#4176)

* Support setting path to ANTLR jar

* Update comment

* Split adaptive_pool2d_avg into sum and div (apache#4186)

* [Documentation]Fix example code in comment of tvm.build_module.build() (apache#4195)

* Fix example code in comment of tvm.build_module.build()

* Update build_module.py

* [relay] use time_evaluator for measurement (apache#4191)

* Add parser support for SUM tflite operator (apache#4182)

* [Relay] Fix memory leak in the interpreter (apache#4155)

* save

lint

* address reviewer comment

* [TOPI] Tunable Template for Conv2D HWCN on CUDA (apache#4168)

* support conv2d HWCN in AutoTVM and Relay

* fix lint

* fix comments and unit tests

* TensorCore Support using Intrinsic (apache#4136)

* add tensor core support

* avoid memory bank conflict

* fix thread sync & better performance

* better performance

* add schedule test for conv2d

* extend into BatchMatMul

* support config fragment shape and layout using intrinsic

* add TensorCore tutorial

* add int support and fix lint

* address comment

* add 32*16*8 TensorCore test

* fix wmma include logic

* [NODE][REFACTOR] Refactor reflection system in node. (apache#4189)

* [NODE][REFACTOR] Refactor reflection system in node.

- Removed the old Node, Node is now just an alias of runtime::Object
- Introduce ReflectionVTable, a new columnar dispatcher to support reflection
  - This allows us to remove vtable from most node objects
  - The VisitAttrs are registered via TVM_REGISTER_NODE_TYPE,
    they are no longer virtual.
- Consolidated serialization and reflection features into node.

* Explicit type qualification when calling destructor.

* Fix SPIRV, more comments

* hotfix the ci (apache#4199)

* [TOPI][x86] Legalize - Support int8xint8 convolution to use VNNI instructions. (apache#4196)

* [Relay] crossentropy_with_logits and its gradient (apache#4075)

* save

* lint

* [hotfix] missing include headers (apache#4204)

* [Relay][Training] Add checkpoint annotation for checkpointing memory optimization (apache#4146)

* add checkpoint annotation for checkpointing memory optimization

* add alpha-equivalence checkpoint test and fix gradient type issue

* fix build issues

* ignore checkpoint annotation when checking missing gradients

* refactor, fix checkpoint compute for tuple and add tests

* [Relay][Params] Add APIs for storing and retrieving parameters from individual functions. (apache#4194)

* Add support for attaching params

* Fix types

* Fix test

* [Relay][Frontend][ONNX] Add support for op Where (apache#4184)

* Add support for op Where

* Update impl version

* [VTA][Chisel] TSIM VTA Source Refactor (apache#4163)

* app init push

* fix on readme

* change name, add bit serial explanation

* rm serialLoadMM, change doc

* syntax change for readme

* add parallel test functionality

* fix readme

* add python doc

* syntax

* init commit

* fix empty line

* fix typo

* [RUNTIME] Separate runtime related contrib into runtime/contrib (apache#4207)

* Fix type var docs (apache#4208)

* [Relay] Setting Legalize opt_level to 1. (apache#4198)

* [TOPI] Fix flaky testcase for check round (apache#4211)

* [Relay][Op] Enhance Upsample Operator to support float scales   (apache#4206)

* add scale2 for upsample

* update unit test for upsampling

* support latest upsample op for multiple frontend

* fix lint

* fix lint

* fix lint

* fix lint

* update scale description and rebase

* [Relay][Quantize] Use fixed point multiplications (apache#4160)

* Update have_int8 condition to run on compute capability 7.x devices (apache#4214)

* Optimizing autotvm task extraction speed (apache#4138)

* Optimize task extraction speed

* correct pylint errors

* Delete unused function

* remove unnecessary argument

* resolve code review comments

* correct cpp lint errors

* remove one more graph_json return value

* fix test bugs

* [Relay] Add Python type functor and tests (apache#4209)

* Add Python type functor and tests

* Lint roller

* Fix typo in packed_func.h (apache#4219)

* Improve the lowering of Qnn Dense (apache#4213)

* [QNN] Improving Dense lowering.

* - Moving get_shape method to util
- Finalizing the test cases and the code structure for optimized dense computation.

* - Fixing cpplint.

* - Addressing review comments.

* - Renaming the variables correctly.

* - Renaming the variables correctly.

* [ARITH] Fix the rule y < x && x <= y (apache#4220)

* [PYTHON] Add __init__ to the generated grammar so that it can be installed properly (apache#4223)

* [Relay][Frontend][ONNX] New Operators and Opsets to Support BERT (apache#4197)

* Added slice v10

* Added constantofshape operation and small refactor.

* Finished one_hot implementation.

* Reshape working across all bert layers.

* Fixed constantofshape and removed code duplication.

* onnx model fully ingested.

* Working on improving onnx tests.

* Changed onnx testing to use onnxruntime instead of caffe2, also formatted.

* Add arbitrary output nodes to onnx frontend.

* Added v6 tiling for bert squad 8 support.

* Small syntax fixes

* Reduced code duplication in split opset versions.

* Added batch matmul test

* Added unstack split testing.

* Added onehot test, needs a little cleanup probably.

* Replaced deprecated constant fill with constantofshape and updated tests accordingly.

* Added tests for new opset version of slice and tile.

* lint clean up

* Lint fixes

* Changed onnx dependency

* Went back to caffe2 runtime for CI integration.

* Rebase and small typo/syntax changes.

* Added hard casting of onehot attributes to int.

* [Relay][Topi][TensorFlow][ONNX][Lang] Add support for Any op (apache#4205)

* Add support for Any op

* Support ONNX frontend

* Add doc

* Add to relay docs

* Dummy change to retrigger CI

*  Update dmlc_tvm_commit_id.txt

* Merge from upstream