
Whisper cannot run inference on the Orange Pi AIPro #1858

Open
Nice-try-zzw opened this issue Dec 4, 2024 · 5 comments
Labels
bug Something isn't working

Comments

@Nice-try-zzw
Contributor

Describe the bug (Mandatory)
Running the following code on the Orange Pi AIPro board:

from transformers import pipeline

transcriber = pipeline(model="openai/whisper-base")
transcriber("test.wav")

raises this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/automatic_speech_recognition.py", line 282, in __call__
    return super().__call__(inputs, **kwargs)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1161, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1297, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1103, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/automatic_speech_recognition.py", line 506, in _forward
    tokens = self.model.generate(
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 537, in generate
    init_tokens = self._retrieve_init_tokens(
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 1343, in _retrieve_init_tokens
    lang_ids = self.detect_language(
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 1450, in detect_language
    non_lang_mask[list(generation_config.lang_to_id.values())] = False
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/_stub_tensor.py", line 49, in fun
    return method(*arg, **kwargs)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/tensor.py", line 496, in __setitem__
    self.assign_value(out)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/_check_jit_forbidden_api.py", line 35, in jit_forbidden
    return fn(*args, **kwargs)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/tensor.py", line 942, in assign_value
    self.assign_value_cpp(value)
RuntimeError: Call aclnnSWhere failed, detail:EZ9999: Inner Error!
EZ9999: [PID: 37455] 2024-12-04-23:05:48.771.581 Parse dynamic kernel config fail.[THREAD:37777]
        TraceBack (most recent call last):
       AclOpKernelInit failed opType[THREAD:37777]
       Op SelectV2 does not has any binary.[THREAD:37784]
       Kernel Run failed. opType: 22, SelectV2[THREAD:37784]
       launch failed for SelectV2, errno:561000.[THREAD:37784]

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ops/kernel/ascend/pyboost/auto_generate/select.cc:53 operator()
  • Hardware Environment(Ascend/GPU/CPU) / 硬件环境:
    -- Ascend

  • Software Environment / 软件环境 (Mandatory / 必填):
    -- MindSpore version (e.g., 1.7.0.Bxxx) : 2.4.1
    -- Python version (e.g., Python 3.7.5) : 3.9.2
    -- OS platform and distribution (e.g., Linux Ubuntu 16.04): Linux orangepiaipro aarch64
    -- GCC/Compiler version (if compiled from source): 11.4.0
    -- CANN: 8.0.RC3.beta1
    -- Kernel: Ascend-cann-kernels-310b_8.0.RC3
    -- npu-smi info:

+--------------------------------------------------------------------------------------------------------+
| npu-smi 23.0.0                                   Version: 23.0.0                                       |
+-------------------------------+-----------------+------------------------------------------------------+
| NPU     Name                  | Health          | Power(W)     Temp(C)           Hugepages-Usage(page) |
| Chip    Device                | Bus-Id          | AICore(%)    Memory-Usage(MB)                        |
+===============================+=================+======================================================+
| 0       310B4                 | Alarm           | 0.0          44                15    / 15            |
| 0       0                     | NA              | 0            4724 / 15609                            |
+===============================+=================+======================================================+
  • Execute Mode (Mandatory) (PyNative/Graph):
/mode pynative
/mode graph

To Reproduce (Mandatory)
Steps to reproduce the behavior:

  1. Install mindnlp from source.
  2. Source the CANN environment.
  3. Run the code above.
  4. Observe the error.

Expected behavior (Mandatory)
With the same CANN version, inference succeeds on a ModelArts 910B4 machine, but on the Orange Pi's 310B4 it fails with the EZ9999 error above. I have double-checked the CANN package environment and it is fine.

Screenshots / Logs (Mandatory)
(error screenshot; the traceback is pasted above)

Additional context (Optional)

@Nice-try-zzw Nice-try-zzw added the bug Something isn't working label Dec 4, 2024
@lvyufeng
Collaborator

You need to set set_pyboost(False).
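
A minimal sketch of how that might be applied, assuming the flag has to take effect before the pipeline (and the model inside it) is constructed; set_pyboost is imported from mindnlp.configs, as in the follow-up below:

# Hypothetical ordering: disable pyboost before any model is built,
# so generation does not go through the pyboost kernel path.
from mindnlp.configs import set_pyboost
from mindnlp.transformers import pipeline

set_pyboost(False)  # assumption: must be called before pipeline(...) creates the model

transcriber = pipeline(model="openai/whisper-base")
print(transcriber("test.wav"))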

@Nice-try-zzw
Contributor Author

I just tried adding set_pyboost(False) on the Orange Pi:

from mindnlp.transformers import pipeline
from mindnlp.configs import set_pyboost

transcriber = pipeline(model="openai/whisper-base")
set_pyboost(False)
transcriber("/root/whisper2om/Birth.wav") # 中文音频
transcriber("/root/whisper2om/test.wav") # 英文音频

It still fails with the same error:

python testmindnlp.py 
/usr/local/miniconda3/lib/python3.9/site-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/usr/local/miniconda3/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
/usr/local/miniconda3/lib/python3.9/site-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  setattr(self, word, getattr(machar, word).flat[0])
/usr/local/miniconda3/lib/python3.9/site-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
  return self._float_to_str(self.smallest_subnormal)
[WARNING] DEVICE(120308,e7ffc34ca020,python):2024-12-11-16:03:45.493.032 [mindspore/ccsrc/plugin/device/ascend/hal/device/ascend_memory_adapter.cc:116] Initialize] Free memory size is less than half of total memory size.Device 0 Device HBM total size:16367894528 Device HBM free size:7702208512 may be other processes occupying this card, check as: ps -ef|grep python
/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py:491: FutureWarning: The input name `inputs` is deprecated. Please make sure to use `input_features` instead.
  warnings.warn(
Due to a bug fix in https://github.com/huggingface/transformers/pull/28687 transcription using a multilingual Whisper will default to language detection followed by transcription instead of translation to English.This might be a breaking change for your use case. If you want to instead always translate your audio to English, make sure to pass `language='en'`.
Passing a tuple of `past_key_values` is deprecated and will be removed in Transformers v4.43.0. You should pass an instance of `EncoderDecoderCache` instead, e.g. `past_key_values=EncoderDecoderCache.from_legacy_cache(past_key_values)`.
Traceback (most recent call last):
  File "/root/mindnlp-whisper/testmindnlp.py", line 6, in <module>
    transcriber("/root/whisper2om/Birth.wav")
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/automatic_speech_recognition.py", line 282, in __call__
    return super().__call__(inputs, **kwargs)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1161, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1297, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/base.py", line 1103, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/pipelines/automatic_speech_recognition.py", line 506, in _forward
    tokens = self.model.generate(
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 537, in generate
    init_tokens = self._retrieve_init_tokens(
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 1343, in _retrieve_init_tokens
    lang_ids = self.detect_language(
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindnlp/transformers/models/whisper/generation_whisper.py", line 1450, in detect_language
    non_lang_mask[list(generation_config.lang_to_id.values())] = False
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/_stub_tensor.py", line 49, in fun
    return method(*arg, **kwargs)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/tensor.py", line 496, in __setitem__
    self.assign_value(out)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/_check_jit_forbidden_api.py", line 35, in jit_forbidden
    return fn(*args, **kwargs)
  File "/usr/local/miniconda3/lib/python3.9/site-packages/mindspore/common/tensor.py", line 942, in assign_value
    self.assign_value_cpp(value)
RuntimeError: Call aclnnSWhere failed, detail:EZ9999: Inner Error!
EZ9999: [PID: 120308] 2024-12-11-16:04:27.069.860 Parse dynamic kernel config fail.[THREAD:120481]
        TraceBack (most recent call last):
       AclOpKernelInit failed opType[THREAD:120481]
       Op SelectV2 does not has any binary.[THREAD:120482]
       Kernel Run failed. opType: 22, SelectV2[THREAD:120482]
       launch failed for SelectV2, errno:561000.[THREAD:120482]

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/ops/kernel/ascend/pyboost/auto_generate/select.cc:53 operator()
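
For reference, both tracebacks fail on the same line in detect_language: a boolean-index assignment on a MindSpore tensor, which gets lowered to a SelectV2 / aclnnSWhere launch that, per the log, has no binary for the 310B. Below is a minimal sketch of the failing pattern together with a hypothetical host-side equivalent; the vocabulary size and token ids are illustrative stand-ins, not values read from mindnlp:

import numpy as np
import mindspore as ms

vocab_size = 51865              # illustrative vocabulary size
lang_token_ids = [50259, 50260] # illustrative stand-in for generation_config.lang_to_id.values()

# Failing pattern from the traceback: __setitem__ on a device tensor
# dispatches a Select kernel (aclnnSWhere) on the NPU.
non_lang_mask = ms.Tensor(np.ones(vocab_size, dtype=np.bool_))
# non_lang_mask[lang_token_ids] = False   # the line that raises EZ9999 here

# Hypothetical host-side equivalent: build the mask in numpy and create the
# tensor once, so no SelectV2 launch is required.
mask_np = np.ones(vocab_size, dtype=np.bool_)
mask_np[lang_token_ids] = False
non_lang_mask = ms.Tensor(mask_np)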

@freesrz93

Same error on the Orange Pi AIPro with an Ascend 310B1.

@lvyufeng
Collaborator

Did you install mindnlp from the latest source?

@freesrz93

Yes. Reinstalling mindnlp from the master branch still gives the same error.
