
Add smmla/ummla support in quantized Conv2d #6802

Merged: 4 commits merged into apache:main on Nov 6, 2020

Conversation

giuseros (Contributor)

High level description of the submission

This introduces support for smmla/ummla instructions in TVM:

  • Added is_mmla_available function in arm_utils.py (a hedged sketch of such a check follows this list)
  • Added the tiling node + tensorization schedule in conv2d_gemm.py
  • Added the intrinsic support in tensor_intrin.py
  • Added the test-case in test_topi_conv2d_int8.py
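
For illustration, here is a minimal sketch of what an `is_mmla_available`-style check can look like, assuming it boils down to inspecting the current target's `-mattr` flags for the i8mm extension (the actual helper in `arm_utils.py` may also take the architecture version into account):

```python
# Hedged sketch, not the exact TVM implementation: report whether the
# current target advertises the Armv8.6-A i8mm extension that provides
# the smmla/ummla instructions.
import tvm


def is_mmla_available_sketch():
    """True if the current target carries "+i8mm" in its -mattr list,
    e.g. "llvm -mtriple=aarch64-linux-gnu -mattr=+v8.6a,+i8mm"."""
    target = tvm.target.Target.current(allow_none=True)
    if target is None:
        return False
    return "+i8mm" in target.mattr
```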

RFC

This PR is based on the following RFC: https://discuss.tvm.apache.org/t/rfc-improve-quantized-convolution-through-mmla-instructions/8336

Change-Id: Iff48c77f16fe1e64ecb733da965a879651ce635f
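
As background for readers unfamiliar with these instructions (illustrative, not part of the diff): a single `smmla`/`ummla` operation multiplies a 2x8 tile of 8-bit operands by the transpose of another 2x8 tile and accumulates the 2x2 result into 32-bit lanes. A NumPy model of that semantics:

```python
# NumPy model of one smmla/ummla operation (illustrative, not TVM code):
# acc (2x2 int32) += a_tile (2x8) @ b_tile (2x8)^T, with int8 operands
# for smmla and uint8 operands for ummla.
import numpy as np


def mmla_2x2(acc, a_tile, b_tile):
    """acc: (2, 2) int32; a_tile, b_tile: (2, 8) int8 or uint8."""
    return acc + a_tile.astype(np.int32) @ b_tile.astype(np.int32).T


acc = np.zeros((2, 2), dtype=np.int32)
a = np.random.randint(-128, 128, size=(2, 8), dtype=np.int8)
b = np.random.randint(-128, 128, size=(2, 8), dtype=np.int8)
acc = mmla_2x2(acc, a, b)  # one accumulation step of the signed variant
```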

giuseros (Contributor, Author)

cc: @anijain2305, @FrozenGene, @mbaret, @u99127

@cbalint13 (Contributor) left a comment

Nice addition @giuseros!
LGTM

Inline review comments (now resolved) were left on python/tvm/topi/arm_cpu/arm_utils.py, python/tvm/topi/arm_cpu/conv2d_gemm.py, and python/tvm/topi/arm_cpu/tensor_intrin.py.
giuseros (Contributor, Author) commented Nov 2, 2020

Thanks for the review @mbaret! The only caveat now is that we would need LLVM 11 support in the CI to run the integration test (LLVM 8 does not have i8mm support). I am commenting out the test for now and will re-enable it once the LLVM version in the CI is updated.
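
For reference, a hypothetical way to gate the test on compiler support instead of commenting it out (a sketch; the test name is illustrative and this is not what the PR does) would be a pytest skip keyed on the LLVM version TVM was built with:

```python
# Hypothetical guard, assuming pytest and an LLVM-enabled TVM build;
# the PR itself comments the test out until the CI moves to LLVM 11.
import pytest
import tvm.target.codegen


@pytest.mark.skipif(
    tvm.target.codegen.llvm_version_major() < 11,
    reason="smmla/ummla codegen requires i8mm support, available from LLVM 11",
)
def test_conv2d_int8_mmla():
    # Body omitted: would build the int8 conv2d schedule with an aarch64
    # target carrying "+i8mm" in -mattr and compare against a reference.
    pass
```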

@mbaret (Contributor) left a comment

LGTM, nice job again Giuseppe.

Giuseppe Rossini added 4 commits November 6, 2020 13:26
This introduces support for `smmla`/`ummla` instructions in TVM:
- Added `is_mmla_available` function in `arm_utils.py`
- Added the tiling node + tensorization schedule in `conv2d_gemm.py`
- Added the intrinsic support in `tensor_intrin.py`
- Added the test-case in `test_topi_conv2d_int8.py`

Change-Id: Iff48c77f16fe1e64ecb733da965a879651ce635f
giuseros (Contributor, Author) commented Nov 6, 2020

Hi @anijain2305, @FrozenGene,

Any chance you could review this?

Thanks in advance!

FrozenGene (Member)

About LLVM 11, I think we could update it for our CI.

giuseros (Contributor, Author) commented Nov 6, 2020

Hi @FrozenGene,
Thanks for approving! Yes, I wanted to do that in a separate PR to not pollute this one. Is that ok with you?

FrozenGene (Member)

> Hi @FrozenGene,
> Thanks for approving! Yes, I wanted to do that in a separate PR to not pollute this one. Is that ok with you?

Yes.

@FrozenGene FrozenGene merged commit 83b75f8 into apache:main Nov 6, 2020
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 2, 2020
* Add smmla/ummla support in quantized Conv2d

This introduces support for `smmla`/`ummla` instructions in TVM:
- Added `is_mmla_available` function in `arm_utils.py`
- Added the tiling node + tensorization schedule in `conv2d_gemm.py`
- Added the intrinsic support in `tensor_intrin.py`
- Added the test-case in `test_topi_conv2d_int8.py`

Change-Id: Iff48c77f16fe1e64ecb733da965a879651ce635f

* Address review comments and test failures

* Fix linting

* Rebasing
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Dec 4, 2020 (same commit message as above)
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Dec 4, 2020 (same commit message as above)