This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

Update imagenet quantization script for MKL-DNN #14407

Closed
wants to merge 1 commit

Conversation

ciyongch
Contributor

Description

This PR skips quantizing FullyConnected layers when their inputs are signed int8 and the subgraph backend is set to MKLDNN, due to a limitation in the current version of MKL-DNN. This feature will be enabled once MKL-DNN is upgraded to 0.18 or higher.
When not using the MKLDNN subgraph backend (general mode), users can still use both int8 and uint8 quantized FullyConnected on the CPU platform (in that case, int8 goes through the IGEMM path, but without the requantize/dequantize fusion feature).
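The exclusion logic described above can be sketched as plain Python. This is an illustrative sketch only: the function and variable names (`build_excluded_syms`, `fc_layers`) are hypothetical and do not come from the actual script, which manages an `excluded_sym_names` list inline per model.

```python
# Hypothetical sketch of the exclusion rule this PR describes: when the
# MKLDNN subgraph backend is active, FullyConnected layers (whose inputs
# may be signed int8) are added to the excluded-symbol list so they are
# not quantized. Names here are illustrative, not the script's real API.

def build_excluded_syms(model, subgraph_backend, fc_layers):
    """Return symbol names to exclude from quantization."""
    excluded = []
    if model == 'imagenet1k-inception-bn':
        # The script always excludes 'flatten' for this model.
        excluded.append('flatten')
    if subgraph_backend == 'MKLDNN':
        # MKL-DNN < 0.18 cannot handle signed int8 inputs to quantized
        # FullyConnected, so skip those layers entirely.
        excluded.extend(fc_layers)
    return excluded

# With the MKLDNN backend, fc1 is excluded; without it, fc1 can still be
# quantized (int8 then takes the IGEMM path, without fusion).
print(build_excluded_syms('imagenet1k-inception-bn', 'MKLDNN', ['fc1']))
print(build_excluded_syms('imagenet1k-inception-bn', None, ['fc1']))
```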

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here
    @pengzhao-intel @TaoLv

@ciyongch ciyongch requested a review from szha as a code owner March 13, 2019 02:35
if exclude_first_conv:
    excluded_sym_names += ['conv0']
elif args.model == 'imagenet1k-inception-bn':
    rgb_mean = '123.68,116.779,103.939'
    rgb_std = '1,1,1'
    excluded_sym_names += ['flatten']
    # TODO: fc1 will be enabled after MKL-DNN 0.18
Contributor
v0.18 will be available soon. @TaoLv is working on the upgrade.
Could we wait a few days for v0.18?

Contributor Author

For now, these three examples will fail due to the limitation of quantized FullyConnected (as it's enabled in the current script).
If v0.18 is merged into master soon, then this would be fine.

@karan6181
Contributor

@mxnet-label-bot add [Quantization, MKLDNN]

@marcoabreu marcoabreu added MKLDNN Quantization Issues/Feature Requests related to Quantization labels Mar 13, 2019
@pengzhao-intel
Contributor

pengzhao-intel commented Mar 18, 2019

@ciyongch 0.18 PR is merged. Please check if the problem is fixed. #13668

@ciyongch
Contributor Author

@pengzhao-intel, sounds great. There's a check in the current quantized FullyConnected that needs to be removed to enable signed int8 inputs. After that, the problem should be fixed. Another PR to enable this is in progress :)

@ciyongch
Contributor Author

Since PR #14466 enables signed int8 inputs to quantized FullyConnected, no need to apply this patch. Close it now.

@ciyongch ciyongch closed this Mar 19, 2019
@ciyongch ciyongch deleted the update_imagenet_qsym branch May 22, 2019 02:35
4 participants