Whisper cannot run inference on Orange Pi AIPro #1858
Labels: bug (Something isn't working)

Comments
You need to set set_pyboost(False).
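(A minimal sketch of that suggestion, for reference only; calling set_pyboost(False) before the pipeline is constructed is my assumption, the comment does not say where the call has to go:)

from mindnlp.transformers import pipeline
from mindnlp.configs import set_pyboost

set_pyboost(False)  # assumption: disable pyboost before building/running the pipeline
transcriber = pipeline(model="openai/whisper-base")
print(transcriber("/root/whisper2om/Birth.wav"))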
I just added that on the Orange Pi:

from mindnlp.transformers import pipeline
from mindnlp.configs import set_pyboost

transcriber = pipeline(model="openai/whisper-base")
set_pyboost(False)
transcriber("/root/whisper2om/Birth.wav")  # Chinese audio
transcriber("/root/whisper2om/test.wav")   # English audio

It still fails. Running python testmindnlp.py gives:
/usr/local/miniconda3/lib/python3.9/site-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
setattr(self, word, getattr(machar, word).flat[0])
/usr/local/miniconda3/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
return self._float_to_str(self.smallest_subnormal)
/usr/local/miniconda3/lib/python3.9/site-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
setattr(self, word, getattr(machar, word).flat[0])
/usr/local/miniconda3/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
return self._float_to_str(self.smallest_subnormal)
[WARNING] DEVICE(120308,e7ffc34ca020,python):2024-12-11-16:03:45.493.032 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:116] Initialize] Free memory size is less than half of total memory size.Device 0 Device HBM total size:16367894528 Device HBM free size:7702208512 may be other processes occupying this card, check as: ps -ef|grep python
/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py:491: FutureWarning: The input name `inputs` is deprecated. Please make sure to use `input_features` instead.
warnings.warn(
Due to a bug fix in https://github.com/huggingface/transformers/pull/28687 transcription using a multilingual Whisper will default to language detection followed by transcription instead of translation to English. This might be a breaking change for your use case. If you want to instead always translate your audio to English, make sure to pass `language='en'`.
Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.43.0. You should pass an instance of `EncoderDecoderCache` instead, e.g. `past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`.
Traceback (most recent call last):
File "/root/mindnlp-whisper/testmindnlp.py", line 6, in <module>
transcriber("/root/whisper2om/Birth.wav")
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/automatic_speech_recognition.py", line 282, in __call__
return super().__call__(inputs, **kwargs)
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1161, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1297, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1103, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/automatic_speech_recognition.py", line 506, in _forward
tokens = self.model.generate(
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 537, in generate
init_tokens = self._retrieve_init_tokens(
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 1343, in _retrieve_init_tokens
lang_ids = self.detect_language(
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 1450, in detect_language
non_lang_mask[list(generation_config.lang_to_id.values())] = False
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/_stub_tensor.py", line 49, in fun
return method(*arg, **kwargs)
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/tensor.py", line 496, in __setitem__
self.assign_value(out)
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/_check_jit_forbidden_api.py", line 35, in jit_forbidden
return fn(*args, **kwargs)
File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/tensor.py", line 942, in assign_value
self.assign_value_cpp(value)
RuntimeError: Call aclnnSWhere failed, detail:EZ9999: Inner Error!
EZ9999: [PID: 120308] 2024-12-11-16:04:27.069.860 Parse dynamic kernel config fail.[THREAD:120481]
TraceBack (most recent call last):
AclOpKernelInit failed opType[THREAD:120481]
Op SelectV2 does not has any binary.[THREAD:120482]
Kernel Run failed. opType: 22, SelectV2[THREAD:120482]
launch failed for SelectV2, errno:561000.[THREAD:120482]
----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ops/kernel/ascend/pyboost/auto_generate/select.cc:53 operator()
Same error on Orange Pi AIPro, Ascend310B1.

Did you install mindnlp from the latest source?

Yes. I reinstalled mindnlp from the master branch and it still fails with the same error.
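For what it's worth, the warning earlier in the log suggests passing language='en' to force translation instead of language detection, and the traceback fails inside detect_language(), so passing the language explicitly might sidestep the failing SelectV2 kernel entirely. A minimal sketch, assuming mindnlp's ASR pipeline accepts generate_kwargs the same way the Hugging Face transformers pipeline does; this is not a confirmed fix for the EZ9999 error:

from mindnlp.transformers import pipeline
from mindnlp.configs import set_pyboost

set_pyboost(False)
transcriber = pipeline(model="openai/whisper-base")
# With the language given up front, Whisper's generate() should skip
# detect_language(), which is where the aclnnSWhere/SelectV2 error is raised.
result = transcriber("/root/whisper2om/Birth.wav", generate_kwargs={"language": "zh"})
print(result["text"])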
Describe the bug (Mandatory)
On an Orange Pi AIPro board, running the following code:
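(The script itself did not come through here; judging from the comment thread above, it was presumably the following, without the set_pyboost call that was added later:)

from mindnlp.transformers import pipeline

transcriber = pipeline(model="openai/whisper-base")
transcriber("/root/whisper2om/Birth.wav")  # Chinese audio
transcriber("/root/whisper2om/test.wav")   # English audio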
fails with the EZ9999 error (the full traceback is in the comments above).
Hardware Environment (Ascend/GPU/CPU): Ascend
Software Environment (Mandatory):
-- MindSpore version (e.g., 1.7.0.Bxxx) : 2.4.1
-- Python version (e.g., Python 3.7.5) : 3.9.2
-- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux orangepiaipro aarch64
-- GCC/Compiler version (if compiled from source): 11.4.0
-- CANN: 8.0.RC3.beta1
-- Kernel: Ascend-cann-kernels-310b_8.0.RC3
-- npu-smi info:
-- Execution mode (PyNative/Graph):

To Reproduce (Mandatory)
Steps to reproduce the behavior:
Expected behavior (Mandatory)
With the same CANN version, inference succeeds on a ModelArts 910B4 machine but fails on the Orange Pi's 310B4, reporting the EZ9999 error; I have double-checked the CANN package environment and it is fine.
Screenshots / Logs (Mandatory)
Additional context (Optional)
Add any other context about the problem here.