
Add QuantizeFusionLSTM pass and collect lstm scales #33797

Closed

Conversation

@lidanqing-intel (Contributor) commented Jun 28, 2021

PR types

New features

PR changes

Others

Describe

Add QuantizeFusionLSTM pass and add LSTM scales

@paddle-bot-old

Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

@lidanqing-intel lidanqing-intel changed the title Add QuantizeFusionLSTM pass, add fc, multi_gru, fusion_lstm into default quantize ops Add QuantizeFusionLSTM pass, fc, multi_gru, fusion_lstm into default quantize ops Jun 28, 2021
@@ -2251,7 +2251,8 @@ PDNode *patterns::QuantizePlacement::operator()(
   std::unordered_set<std::string> supported_op_types =
       std::unordered_set<std::string>(
-          {"concat", "conv2d", "elementwise_add", "fc", "matmul", "pool2d",
-           "prior_box", "relu", "reshape2", "transpose2", "fusion_gru"});
+          {"concat", "conv2d", "elementwise_add", "fc", "matmul", "pool2d",
+           "prior_box", "relu", "reshape2", "transpose2", "fusion_gru",
+           "multi_gru", "fusion_lstm"});
Contributor
For now, multi_gru is supported in QAT but not in PTQ (it will be added in #33749). Likewise, here you add support for fusion_lstm for PTQ but not for QAT. I think we should separate these lists, because the current one is hard to read and might cause bugs.
Maybe in quant2_int8_mkldnn_pass.py we should define a default quant2 list by passing quantize_enabled_op_types to cpu_quantize_placement_pass.
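The idea of keeping one authoritative op-type list on the Python side could be sketched roughly as follows. This is a minimal illustration, not Paddle's actual API: the attribute name quantize_enabled_op_types and the pass name cpu_quantize_placement_pass come from the comment above, while DEFAULT_QUANT2_OP_TYPES and apply_placement_pass are hypothetical names invented for the sketch.

```python
# Hypothetical sketch: keep a single default quant2 op-type list in
# quant2_int8_mkldnn_pass.py and hand it to the placement pass, instead
# of hard-coding the list inside the C++ pass itself.

# Hypothetical default list; fusion_lstm / multi_gru would only be added
# here once both QAT and PTQ support them.
DEFAULT_QUANT2_OP_TYPES = {
    "concat", "conv2d", "elementwise_add", "fc", "matmul", "pool2d",
    "prior_box", "relu", "reshape2", "transpose2", "fusion_gru",
}

def apply_placement_pass(graph, enabled_op_types=None):
    """Hypothetical wrapper: resolve the op-type set that would be passed
    as the 'quantize_enabled_op_types' attribute of
    cpu_quantize_placement_pass before running it on `graph`."""
    op_types = set(enabled_op_types or DEFAULT_QUANT2_OP_TYPES)
    # In the real code this would set the attribute on the pass and apply
    # it to the graph; here we just return the resolved set so the
    # selection logic is visible.
    return op_types

# A PTQ flow could then opt in to fusion_lstm explicitly:
ptq_ops = apply_placement_pass(None, DEFAULT_QUANT2_OP_TYPES | {"fusion_lstm"})
```

This way each flow (QAT, PTQ) states its own enabled op types in one place, rather than both relying on a shared hard-coded C++ list.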

Contributor Author
Hi,

  • For now I only added fusion_lstm; I removed multi_gru.
  • fusion_lstm here is used for QAT; I am not sure whether it applies to PTQ. But QAT also needs this pattern detection: I need to detect the pattern and then substitute it with the fusion_lstm MKL-DNN kernel.


paddle-bot-old bot commented Jul 9, 2021

Sorry to inform you that 2a6feca's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

@lidanqing-intel lidanqing-intel changed the title Add QuantizeFusionLSTM pass, fc, multi_gru, fusion_lstm into default quantize ops Add QuantizeFusionLSTM pass and collect lstm scales Jul 20, 2021
@lidanqing-intel lidanqing-intel added this to the v2.2 milestone Aug 20, 2021