Commit cbaeb85: restructure readme
m-bain committed Dec 19, 2022
1 parent 228b857
Showing 2 changed files with 29 additions and 23 deletions.
README.md (28 additions, 22 deletions)

@@ -4,8 +4,8 @@
<img src="https://img.shields.io/github/stars/m-bain/whisperX.svg?colorA=orange&colorB=orange&logo=github"
alt="GitHub stars">
</a>
-  <a href="https://github.com/hanxiao/bert-as-service/issues">
-    <img src="https://img.shields.io/github/issues/hanxiao/bert-as-service.svg"
+  <a href="https://github.com/m-bain/whisperX/issues">
+    <img src="https://img.shields.io/github/issues/m-bain/whisperx.svg"
alt="GitHub issues">
</a>
<a href="https://github.com/m-bain/whisperX/blob/master/LICENSE">
@@ -17,6 +17,15 @@
</a>
</p>

+<p align="center">
+  <a href="#what-is-it">What is it</a> •
+  <a href="#setup">Setup</a> •
+  <a href="#example">Example usage</a>
+</p>

+<img width="1216" align="center" alt="whisperx-arch" src="https://user-images.githubusercontent.com/36994049/208313881-903ab3ea-4932-45fd-b3dc-70876cddaaa2.png">



<h6 align="center">Made by Max Bain • :globe_with_meridians: <a href="https://www.maxbain.com">https://www.maxbain.com</a></h6>

@@ -25,9 +34,9 @@
</p>


-<h2 align="left">What is it 🔎</h2>
+<h2 align="left" id="what-is-it">What is it 🔎</h2>

-This repository refines the timestamps of openAI's Whisper model via forced alignment with phoneme-based ASR models (e.g. wav2vec2.0)
+This repository refines the timestamps of openAI's Whisper model via forced alignment with phoneme-based ASR models (e.g. wav2vec2.0), with multilingual support.


**Whisper** is an ASR model [developed by OpenAI](https://github.com/openai/whisper), trained on a large dataset of diverse audio. Whilst it does produce highly accurate transcriptions, the corresponding timestamps are at the utterance level, not per word, and can be inaccurate by several seconds.
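To make the gap concrete, here is a small fabricated sketch. The segment dict mimics the shape of a Whisper `result["segments"]` entry, but the helper and the numbers are invented for illustration: splitting a segment's duration evenly across its words gives only a crude per-word estimate, which is exactly what forced alignment improves on.

```python
# Hypothetical illustration, not WhisperX's actual method: Whisper returns
# utterance-level segments, so per-word times must be inferred or aligned.
segment = {"start": 0.0, "end": 4.8, "text": "the quick brown fox jumps"}

def naive_word_times(seg):
    """Split a segment's duration evenly across its words -- a crude
    baseline that forced alignment is meant to replace."""
    words = seg["text"].split()
    step = (seg["end"] - seg["start"]) / len(words)
    return [(w, round(seg["start"] + i * step, 2), round(seg["start"] + (i + 1) * step, 2))
            for i, w in enumerate(words)]

print(naive_word_times(segment))
# → [('the', 0.0, 0.96), ('quick', 0.96, 1.92), ('brown', 1.92, 2.88),
#    ('fox', 2.88, 3.84), ('jumps', 3.84, 4.8)]
```

Evenly spaced estimates like these drift badly whenever speech has pauses or uneven pacing, which is why WhisperX aligns against a phoneme model instead.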
@@ -36,25 +45,25 @@ This repository refines the timestamps of openAI's Whisper model via forced alig

**Forced Alignment** refers to the process by which orthographic transcriptions are aligned to audio recordings to automatically generate phone level segmentation.
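The alignment idea can be illustrated with a toy trellis search in plain NumPy, loosely following the torchaudio CTC forced-alignment tutorial. Everything below is fabricated for illustration -- in a real pipeline the `emission` matrix comes from a phoneme ASR model such as wav2vec2.0, not from hand-planted peaks.

```python
import numpy as np

# Toy sketch of trellis-based forced alignment. emission[t, c] is the
# log-probability of character c at audio frame t (fabricated here).
chars = ["-", "c", "a", "t"]   # index 0 is the CTC blank token
tokens = [1, 2, 3]             # target transcript "cat" as character ids

T = 12
emission = np.full((T, len(chars)), -5.0)
# Plant peaks so the best path is obvious: "c" around frames 1-2,
# "a" around 5-6, "t" around 9-10.
for t, c in [(1, 1), (2, 1), (5, 2), (6, 2), (9, 3), (10, 3)]:
    emission[t, c] = -0.1

def align(emission, tokens, blank=0):
    """Return, for each token, the frame where it is first emitted on the
    max-log-prob path that passes through all tokens in order."""
    T, J = emission.shape[0], len(tokens)
    trellis = np.full((T + 1, J + 1), -np.inf)
    trellis[0, 0] = 0.0
    for t in range(T):
        for j in range(J + 1):
            if trellis[t, j] == -np.inf:
                continue
            # Stay at position j: emit blank, or repeat the current token.
            stay = emission[t, blank]
            if j > 0:
                stay = max(stay, emission[t, tokens[j - 1]])
            trellis[t + 1, j] = max(trellis[t + 1, j], trellis[t, j] + stay)
            # Advance: emit the next token at frame t.
            if j < J:
                trellis[t + 1, j + 1] = max(trellis[t + 1, j + 1],
                                            trellis[t, j] + emission[t, tokens[j]])
    # Backtrack, recording the frame at which each token was entered.
    t, j, frames = T, J, {}
    while t > 0:
        if j > 0 and np.isclose(trellis[t - 1, j - 1] + emission[t - 1, tokens[j - 1]],
                                trellis[t, j]):
            frames[j - 1] = t - 1
            j -= 1
        t -= 1
    return [frames[i] for i in range(len(tokens))]

print(align(emission, tokens))  # → [1, 5, 9]
```

The real system does the same kind of search, but over wav2vec2 frame emissions constrained to Whisper's transcript, which is what pins each word (and phone) to a precise time.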

-<img width="1216" align="center" alt="whisperx-arch" src="https://user-images.githubusercontent.com/36994049/208313881-903ab3ea-4932-45fd-b3dc-70876cddaaa2.png">


-<h2 align="left">Setup ⚙️</h2>
+<h2 align="left" id="setup">Setup ⚙️</h2>
Install this package using

`pip install git+https://github.com/m-bain/whisperx.git`

You may also need to install ffmpeg, Rust, etc. Follow the OpenAI instructions here: https://github.com/openai/whisper#setup.

-<h2 align="left">Examples💬</h2>
+<h2 align="left" id="example">Example usage💬</h2>

### English

Run whisper on an example segment (using default parameters):

-`whisperx examples/sample01.wav --model medium.en --output examples/whisperx --align_model WAV2VEC2_ASR_BASE_960H --align_extend 2`
+whisperx examples/sample01.wav


For increased timestamp accuracy, at the cost of higher GPU memory, use a bigger alignment model, e.g.

-`WAV2VEC2_ASR_LARGE_LV60K_960H` or `HUBERT_ASR_XLARGE`
+whisperx examples/sample01.wav --model medium.en --align_model WAV2VEC2_ASR_LARGE_LV60K_960H --output_dir examples/whisperx

Result using *WhisperX* with forced alignment to wav2vec2.0 large:

@@ -69,16 +78,15 @@ https://user-images.githubusercontent.com/36994049/207743923-b4f0d537-29ae-4be2-
For non-English ASR, it is best to use the `large` Whisper model.

### French
-`whisperx --model large --language fr examples/sample_fr_01.wav --align_model VOXPOPULI_ASR_BASE_10K_FR --output_dir examples/whisperx/ --align_extend 2`
+whisperx examples/sample_fr_01.wav --model large --language fr --align_model VOXPOPULI_ASR_BASE_10K_FR --output_dir examples/whisperx


https://user-images.githubusercontent.com/36994049/208298804-31c49d6f-6787-444e-a53f-e93c52706752.mov



### German
-`whisperx --model large --language de examples/sample_de_01.wav --align_model VOXPOPULI_ASR_BASE_10K_DE --output_dir examples/whisperx/ --align_extend 2`
+whisperx examples/sample_de_01.wav --model large --language de --align_model VOXPOPULI_ASR_BASE_10K_DE --output_dir examples/whisperx


https://user-images.githubusercontent.com/36994049/208298811-e36002ba-3698-4731-97d4-0aebd07e0eb3.mov
@@ -87,47 +95,45 @@ https://user-images.githubusercontent.com/36994049/208298811-e36002ba-3698-4731-


### Italian
-`whisperx --model large --language it examples/sample_it_01.wav --align_model VOXPOPULI_ASR_BASE_10K_IT --output_dir examples/whisperx/ --align_extend 2`
+whisperx examples/sample_it_01.wav --model large --language it --align_model VOXPOPULI_ASR_BASE_10K_IT --output_dir examples/whisperx



https://user-images.githubusercontent.com/36994049/208298819-6f462b2c-8cae-4c54-b8e1-90855794efc7.mov



-<h2 align="left">Limitations ⚠️</h2>
+<h2 align="left" id="limitations">Limitations ⚠️</h2>

- Not thoroughly tested, especially for non-English; results may vary -- please post an issue to let me know the results on your data.
- Whisper normalises spoken numbers, e.g. "fifty seven" to the Arabic numerals "57". This normalization needs to happen after alignment so that the phonemes can be aligned; currently numbers are simply ignored.
- Assumes the initial Whisper timestamps are accurate to some degree (within a margin of 2 seconds; adjust if needed -- bigger margins are more prone to alignment errors).
- This was hacked together quite quickly, so there may be errors; please raise an issue if you encounter any.
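The number-normalization limitation can be sketched with a toy, entirely hypothetical written-to-spoken lookup (a real system would need proper inverse text normalization): align on the spoken form, then merge those word times back onto the written form.

```python
# Hypothetical sketch: Whisper outputs the written form ("57"), but phoneme
# alignment needs the spoken form ("fifty seven"). Map written -> spoken,
# align the spoken words, then merge their times back onto the written token.
SPOKEN = {"57": "fifty seven"}  # hand-made lookup, illustration only

def align_with_numbers(written_words, word_times):
    """Merge the per-word times of a spoken-form number back onto its
    written form; `word_times` maps spoken words to (start, end)."""
    out = []
    for w in written_words:
        spoken = SPOKEN.get(w, w).split()
        start = min(word_times[s][0] for s in spoken)
        end = max(word_times[s][1] for s in spoken)
        out.append((w, start, end))
    return out

# Pretend these spans came from forced alignment on the spoken form.
times = {"fifty": (0.4, 0.7), "seven": (0.7, 1.1), "cats": (1.1, 1.5)}
print(align_with_numbers(["57", "cats"], times))
# → [('57', 0.4, 1.1), ('cats', 1.1, 1.5)]
```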

-<h2 align="left">Coming Soon 🗓</h2>
+<h2 align="left" id="coming-soon">Coming Soon 🗓</h2>

- [x] Multilingual init

- [x] Subtitle .ass output

- [ ] Automatic align model selection based on language detection

- [ ] Option to minimise gpu load (chunk wav2vec)

- [ ] Incorporating word-level speaker diarization

- [ ] Inference speedup with batch processing
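The planned "automatic align model selection" item could, as a rough sketch, map Whisper's detected language code to one of the alignment models already named in this README. The function and dictionary below are hypothetical, not part of the repo:

```python
# Hypothetical defaults, using only model names that appear in this README.
DEFAULT_ALIGN_MODELS = {
    "en": "WAV2VEC2_ASR_BASE_960H",
    "fr": "VOXPOPULI_ASR_BASE_10K_FR",
    "de": "VOXPOPULI_ASR_BASE_10K_DE",
    "it": "VOXPOPULI_ASR_BASE_10K_IT",
}

def pick_align_model(language, override=None):
    """An explicit --align_model override wins; otherwise fall back to the
    per-language default, failing loudly for unsupported languages."""
    if override is not None:
        return override
    try:
        return DEFAULT_ALIGN_MODELS[language]
    except KeyError:
        raise ValueError(f"no default alignment model for language {language!r}")

print(pick_align_model("fr"))  # → VOXPOPULI_ASR_BASE_10K_FR
```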

-<h2 align="left">Contact</h2>
+<h2 align="left" id="contact">Contact</h2>

Contact maxbain[at]robots[dot]ox[dot]ac[dot]uk if using this commercially.


-<h2 align="left">Acknowledgements 🙏</h2>
+<h2 align="left" id="acks">Acknowledgements 🙏</h2>

Of course, this is mostly just a modification to [openAI's whisper](https://github.com/openai/whisper).
Credit as well to this [PyTorch tutorial on forced alignment](https://pytorch.org/tutorials/intermediate/forced_alignment_with_torchaudio_tutorial.html).


-<h2 align="left">Citation</h2>
+<h2 align="left" id="cite">Citation</h2>
If you use this in your research, please cite the repo:

```bibtex
```
whisperx/transcribe.py (1 addition, 1 deletion)

@@ -26,7 +26,7 @@ def transcribe(
compression_ratio_threshold: Optional[float] = 2.4,
logprob_threshold: Optional[float] = -1.0,
no_speech_threshold: Optional[float] = 0.6,
-condition_on_previous_text: bool = True,
+condition_on_previous_text: bool = False,  # turn off by default due to errors it causes
**decode_options,
):
"""
Expand Down
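The `condition_on_previous_text` change in `transcribe.py` above can be motivated with a toy, entirely fabricated simulation (this is not Whisper's actual decoder): when each chunk's decoding is conditioned on the previous chunk's output, one bad chunk can poison everything after it.

```python
# Fabricated stand-in decoder: returns the chunk's true text unless the
# prompt already contains garbage, in which case the garbage sticks --
# mimicking the repetition/failure loops the commit comment refers to.
def decode_chunk(audio_chunk, prompt):
    if "###" in prompt:
        return "###"          # error propagated from the prompt
    return audio_chunk        # pretend decoding is otherwise perfect

chunks = ["hello there", "###", "how are you", "goodbye"]  # chunk 2 is noise

def run(chunks, condition_on_previous_text):
    out, prompt = [], ""
    for c in chunks:
        text = decode_chunk(c, prompt)
        out.append(text)
        prompt = text if condition_on_previous_text else ""
    return out

print(run(chunks, True))   # → ['hello there', '###', '###', '###']
print(run(chunks, False))  # → ['hello there', '###', 'how are you', 'goodbye']
```

With conditioning off, each chunk is decoded independently, so a single bad chunk stays local instead of derailing the rest of the transcript.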
