System Info
- transformers version: 4.51.3
- Platform: macOS-15.3.1-arm64-arm-64bit
- Python version: 3.12.9
- Huggingface_hub version: 0.30.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?:
Who can help?
Running the Speech2Text example on transformers 4.51.x gives either nonsense output or no output. The code I'm running is taken verbatim from https://huggingface.co/docs/transformers/en/model_doc/speech_to_text
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
On transformers 4.50.3 it gives the expected output:
['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
On transformers 4.51.x it gives either no output or nonsense output:
With Python 3.12 & transformers 4.51.3:
['that man man man man man man man man man man man man turn turn turn turn turn turn turn turn turn thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin']
With Python 3.9 & transformers 4.51.3:
['']
Information
- The official example scripts
- My own modified scripts
Tasks
- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- My own task or dataset (give details below)
Reproduction
conda create --name temp python=3.12
conda activate temp
pip install torch torchaudio soundfile librosa datasets transformers sentencepiece
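(Not part of the original docs example — just a sanity check, assuming a plain pip environment, to confirm which transformers version actually resolved, since 4.50.3 and 4.51.x behave differently:)

python -c "import transformers; print(transformers.__version__)"
# prints 4.51.3 in the failing environment, 4.50.3 in the working one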
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
Expected behavior
['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
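As a temporary workaround on my side (not a fix for the regression itself), pinning back to the last release that worked for me restores the expected transcription:

pip install "transformers==4.50.3"
# re-running the script above then prints the 'mister quilter ...' transcription again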