Create a conda environment and install the following packages (the environment needs its own Python so that `pip` installs into it):

```bash
conda create -n whisper python=3.10
conda activate whisper
pip install onnx
pip install torch
pip install transformers
pip install "optimum[onnx]"
pip install onnxruntime
pip install onnxruntime-extensions
pip install ort-nightly==1.16.0.dev20230701001 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/
```
Alternatively, create the environment from the saved environment file `whisper.yml` with `conda env create -f whisper.yml`.
- Run from PyTorch
- Run from Optimum
Install Olive from source:

```bash
python -m pip install git+https://github.com/microsoft/Olive.git
```
Configs for each of the model variants can be found in `./models`. For example, to optimize and quantize whisper-tiny for CPU:

```bash
cd models/whisper-tiny
python prepare_whisper_configs.py --model_name openai/whisper-tiny.en
python -m olive.workflows.run --config whisper_cpu_int8.json --setup
python -m olive.workflows.run --config whisper_cpu_int8.json
```

The first `olive.workflows.run` invocation with `--setup` installs the dependencies required by the workflow's passes; the second runs the workflow itself.
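For scripting, the two Olive invocations above can be wrapped in Python. The `olive_cmd` helper below is a hypothetical convenience, not part of Olive's API; it just builds the equivalent command lines:

```python
# Build the `python -m olive.workflows.run` command lines used above.
# olive_cmd is a hypothetical helper, not part of Olive itself.
import sys

def olive_cmd(config, setup=False):
    """Return the command list; with setup=True, --setup installs the
    workflow passes' dependencies instead of running the workflow."""
    cmd = [sys.executable, "-m", "olive.workflows.run", "--config", config]
    if setup:
        cmd.append("--setup")
    return cmd

# To execute (requires Olive installed):
# import subprocess
# subprocess.run(olive_cmd("whisper_cpu_int8.json", setup=True), check=True)  # setup step
# subprocess.run(olive_cmd("whisper_cpu_int8.json"), check=True)              # run workflow
```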
In the `models` folder, you will find a folder called `CandidateModels/cpu-cpu/BestCandidateModel_1`, which contains the model `model.onnx`. Copy the model into your application directory.
Then run the sample transcription script:

```bash
cd cpu
python transcribe.py
```
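A sketch of what a `transcribe.py` for the Olive-produced model might look like. The input names and beam-search defaults below are assumptions modeled on Olive's whisper example; inspect your model's actual inputs (e.g. with Netron) before relying on them:

```python
# Sketch: build the input feed for the optimized whisper model.
# Input names and default values are ASSUMPTIONS based on Olive's whisper
# example (raw audio bytes plus beam-search parameters); verify against
# your exported model.
import numpy as np

def build_inputs(audio_bytes, max_length=200, num_beams=2):
    """Return an onnxruntime input feed: raw audio plus beam-search settings."""
    return {
        "audio_stream": np.frombuffer(audio_bytes, dtype=np.uint8)[None, :],
        "max_length": np.array([max_length], dtype=np.int32),
        "min_length": np.array([0], dtype=np.int32),
        "num_beams": np.array([num_beams], dtype=np.int32),
        "num_return_sequences": np.array([1], dtype=np.int32),
        "length_penalty": np.array([1.0], dtype=np.float32),
        "repetition_penalty": np.array([1.0], dtype=np.float32),
    }

# Running it (requires onnxruntime and onnxruntime-extensions):
# import onnxruntime as ort
# from onnxruntime_extensions import get_library_path
# opts = ort.SessionOptions()
# opts.register_custom_ops_library(get_library_path())
# sess = ort.InferenceSession("model.onnx", opts, providers=["CPUExecutionProvider"])
# text = sess.run(None, build_inputs(open("audio.mp3", "rb").read()))[0]
```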
Alternatively, export the model with ONNX Runtime's transformers converter:

```bash
python -m onnxruntime.transformers.models.whisper.convert_to_onnx -m openai/whisper-base.en --output whisper -e
```
Download a test audio clip and the end-to-end script from onnxruntime-extensions, then run the script against the exported model:

```bash
curl https://raw.githubusercontent.com/microsoft/onnxruntime-extensions/main/test/data/1272-141231-0002.mp3 > 1272-141231-0002.mp3
curl https://raw.githubusercontent.com/microsoft/onnxruntime-extensions/main/tutorials/whisper_e2e.py > whisper_e2e.py
python whisper_e2e.py -a 1272-141231-0002.mp3 -m whisper/openai/whisper-base.en_beamsearch.onnx
```
This produces `whisper-base.en_all.onnx.data`, the external-data file that accompanies the combined `whisper-base.en_all.onnx` model.