A completely offline voice assistant using the Mistral 7b model via Ollama and Whisper speech recognition models. This builds on the excellent work of maudoin by adding Mac compatibility, along with various improvements.
Demo video: `ollama-voice.mp4`
- Install Ollama on your Mac.
- Download the Mistral 7b model using the `ollama pull mistral` command.
- Download an OpenAI Whisper model (base.en works fine).
- Clone this repo somewhere.
- Place the Whisper model in a `/whisper` directory in the repo root folder.
- Make sure you have Python and Pip installed.
- For Apple silicon support of the PyAudio library you'll need to install Homebrew and run `brew install portaudio`.
- Run `pip install -r requirements.txt` to install the dependencies.
- Run `python assistant.py` to start the assistant.
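For convenience, the steps above can be collected into a single shell session. This is a sketch rather than a verbatim script: the clone URL is a placeholder, and the Whisper model fetch assumes the `openai-whisper` package (pulled in by `requirements.txt`) caches `base.en.pt` under `~/.cache/whisper`.

```bash
brew install portaudio                     # PortAudio, required by PyAudio on Apple silicon
ollama pull mistral                        # fetch the Mistral 7b model

git clone <this-repo-url> ollama-voice     # replace <this-repo-url> with this repo's clone URL
cd ollama-voice
pip install -r requirements.txt

# Fetch base.en and place it in the /whisper directory (assumes the
# openai-whisper package caches downloads under ~/.cache/whisper).
mkdir -p whisper
python -c "import whisper; whisper.load_model('base.en')"
cp ~/.cache/whisper/base.en.pt whisper/

python assistant.py                        # start the assistant
```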
You can improve the quality of the assistant's voice by downloading a higher-quality system voice. These instructions work on macOS 14 Sonoma:
- In System Settings select Accessibility > Spoken Content
- Select System Voice and Manage Voices...
- For English find "Zoe (Premium)" and download it.
- Select Zoe (Premium) as your System voice.
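To confirm the premium voice is installed and audible, you can test it from a terminal with macOS's built-in `say` command; whether the assistant itself picks up the system voice depends on its text-to-speech backend.

```bash
say -v '?' | grep -i zoe                                    # list installed voices matching "Zoe"
say -v "Zoe (Premium)" "Hello, this is the premium voice."  # use the exact name shown above
```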
You can set up support for other languages by editing `assistant.yaml`. Be sure to download a different Whisper model for your language and change the default `modelPath`.
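As an illustration only, a language switch might touch entries like the ones below. The key names here are hypothetical apart from `modelPath`, so use the actual `assistant.yaml` shipped in the repo as the reference for the real structure.

```yaml
# Hypothetical excerpt of assistant.yaml — key names other than modelPath are guesses.
whisperRecognition:
  modelPath: "whisper/base.pt"   # a multilingual Whisper model instead of base.en
  lang: "fr"                     # target language for transcription
```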