Platform: x86 Linux
GitHub version: latest, 2.9.6
Build command: cmake .. -DMNN_BUILD_CONVERTER=ON -DMNN_BUILD_QUANTOOLS=ON

When using the quantized.out tool to quantize a multi-input sequence model, I hit the following problems:
(1) With feature_quantize_method set to EMA, the quantization process prints error logs, and creating a Session with the quantized model fails (Compute Shape Error):
quantize with EMA: 100.00 %
Reshape error: 1 -> 128
Reshape error: 1 -> 256
Can't find input extraTensorDescribe for /enc/emb_gru/Sigmoid_output_0
Can't find input extraTensorDescribe for /enc/emb_gru/Tanh_output_0
Can't find input extraTensorDescribe for /enc/emb_gru/Sigmoid_1_output_0
......
(2) With feature_quantize_method set to ADMM, quantization crashes with Segmentation fault (core dumped).
(3) With feature_quantize_method set to KL, the quantized model's outputs differ greatly from the unquantized model's.

I also tried quantizing with the Python tool in MNNPythonOfflineQuant (EMA). After quantization, intermediate operator outputs contain nan; I traced this to a Conv operator (converted from Gemm) producing nan, and for the nan input the subsequent activation function outputs 1.

Are these problems caused by a bug in MNN's quantization implementation, or by my model design? (Reproducible resources: https://drive.google.com/file/d/1zyFIADiZxIC4nzbcndv14Ex4-YiLdJrb/view?usp=sharing)
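The localization step described above (tracing the first nan back to a specific operator) can be sketched generically; assuming you have dumped intermediate outputs into a name-to-array dict, a minimal NumPy helper finds the earliest tensor containing nan (the dict and tensor names here are hypothetical stand-ins, not MNN API):

```python
import numpy as np

def first_nan_tensor(named_outputs):
    """Return the name of the first intermediate output containing NaN, or None."""
    for name, tensor in named_outputs.items():
        if np.isnan(tensor).any():
            return name
    return None

# Toy stand-in for dumped intermediate outputs, in execution order.
outs = {
    "conv_from_gemm": np.array([1.0, np.nan]),  # Conv converted from Gemm emits nan
    "activation": np.array([1.0, 1.0]),         # downstream activation saturates to 1
}
print(first_nan_tensor(outs))  # -> conv_from_gemm
```

Since dicts preserve insertion order, inserting outputs in execution order makes the returned name the first operator to go bad.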
We'll look into it. In the meantime, you can use the dynamic quantization scheme as a workaround.
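For reference, MNN's dynamic (weight-only) quantization is applied at model-conversion time rather than via quantized.out. A sketch, assuming the MNNConvert flag names below match your 2.9.6 build (verify with MNNConvert --help before relying on them):

```shell
# Weight-only (dynamic) quantization during conversion:
# weights are stored as 8-bit, activations stay float and are
# quantized dynamically at runtime.
./MNNConvert -f ONNX \
    --modelFile model.onnx \
    --MNNModel model_w8.mnn \
    --weightQuantBits 8
```

Because no feature (activation) calibration is involved, this path sidesteps the EMA/ADMM/KL calibration code where the errors above occur.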
The Google Drive link above is not accessible from our company computers. Please send the resources to this email address instead: ld1srcv0@163.com