
Install notes 📝

System Requirements

Windows: Version 8 or greater.

macOS: Version 13.3 or greater.

Linux: Tested on Ubuntu 22.04 and newer.

Hardware: No special requirements. Resource usage can be customized through the advanced settings in the main window.

Currently, listening to the audio file isn't supported on Linux.

In addition, you may need to set this environment variable before starting the app:

export WEBKIT_DISABLE_COMPOSITING_MODE=1
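If you want the variable set every time, a tiny launcher script works; this is a minimal sketch in which the `vibe` binary name and the `/tmp` script path are assumptions — adjust them to your install:

```shell
# Sketch: wrapper that sets the WebKit workaround before launching Vibe.
# The `vibe` binary name and /tmp path are assumptions; adjust as needed.
cat > /tmp/run-vibe.sh <<'EOF'
#!/bin/sh
export WEBKIT_DISABLE_COMPOSITING_MODE=1
exec vibe "$@"
EOF
chmod +x /tmp/run-vibe.sh
```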

Setting Up Summarization with Ollama

  1. Install Ollama

Download and install Ollama from https://ollama.com.

  2. Install a Model

Once installed, set up a model for summarization. For example, you can install llama3.1 by running the following command in your terminal:

ollama run llama3.1
  3. Enable Summarization

After the model is installed, open the Vibe app. Navigate to More Options and enable Summarize just before the transcription step. You can leave the settings at their default values.

Make sure to run the "Run check" to verify that it works.

That's it! Summarization will now be active in Vibe.
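To confirm from a terminal that the Ollama server is up and the model is installed, you can query its local HTTP API (Ollama serves on port 11434 by default). This is just a sketch of such a check; either branch prints a status line:

```shell
# Sketch: check whether the local Ollama server is reachable and already
# lists the llama3.1 model (Ollama serves on port 11434 by default).
if curl -fsS --max-time 2 http://localhost:11434/api/tags 2>/dev/null | grep -q 'llama3.1'; then
  STATUS="ollama status: ready with llama3.1"
else
  STATUS="ollama status: not reachable or model missing"
fi
echo "$STATUS"
```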

Manual Install 🛠️

macOS (Apple Silicon): install the aarch64.dmg file from releases. Don't forget to right-click and choose Open from Applications the first time.

macOS (Intel): install the x64.dmg file from releases. Don't forget to right-click and choose Open from Applications the first time.

Windows: install .exe file from releases

Linux: install .deb from releases (Arch users can use debtap)

All models are available for manual install; see Pre-built models.

Offline Setup 💾

Offline installation with Vibe is easy: open the app, cancel the download, and navigate to the Customize section within settings.

All models are available for manual install; see settings or Pre-built models.

Faster transcriptions on macOS (2-3x) 🌟

  1. Download the matching .mlmodelc.zip for your model from https://huggingface.co/ggerganov/whisper.cpp/tree/main
  • e.g. ggml-medium-encoder.mlmodelc.zip matches ggml-medium.bin
  2. Open the models path from Vibe settings
  3. Unzip it and drag the .mlmodelc folder into the models folder so that it sits alongside the .bin file
  4. Transcribe a file. The first time you use the model it will take longer while the Core ML model compiles; every subsequent run will be faster.
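The layout the steps above produce can be sanity-checked with a short script. The demo directory below is a hypothetical stand-in; in practice, point MODELS_DIR at the path shown in Vibe's settings:

```shell
# Sketch: verify each GGML model has its Core ML encoder alongside it.
# /tmp/vibe-models-demo is a stand-in for the real models folder.
MODELS_DIR=/tmp/vibe-models-demo
mkdir -p "$MODELS_DIR/ggml-medium-encoder.mlmodelc"   # demo encoder folder
touch "$MODELS_DIR/ggml-medium.bin"                   # demo model file
for bin in "$MODELS_DIR"/ggml-*.bin; do
  name=$(basename "$bin" .bin)
  if [ -d "$MODELS_DIR/${name}-encoder.mlmodelc" ]; then
    echo "$name: Core ML encoder present"
  else
    echo "$name: ${name}-encoder.mlmodelc missing"
  fi
done
```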

Error: msvc140.dll not found ❌

Download and install vc_redist.x64.exe

Special link to download models in Vibe

You can add links on your website that let users download your models directly into Vibe.

The URL should look like this:

vibe://download/?url=https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.bin?download=true
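If you host several models, links in this form can be generated in a loop; the model names below are examples from the whisper.cpp repository:

```shell
# Sketch: generate vibe:// download links for a few whisper.cpp models.
BASE='https://huggingface.co/ggerganov/whisper.cpp/resolve/main'
LINKS=$(for model in ggml-tiny.bin ggml-base.bin ggml-medium.bin; do
  echo "vibe://download/?url=${BASE}/${model}?download=true"
done)
printf '%s\n' "$LINKS"
```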

Usage on a Linux server

To use Vibe on a Linux server, you need to install a virtual display (Xvfb):

sudo apt-get install xvfb -y
Xvfb :1 -screen 0 1024x768x24 &
export DISPLAY=:1

wget https://github.com/thewh1teagle/vibe/releases/download/v0.0.1/ggml-medium.bin
wget https://github.com/thewh1teagle/vibe/raw/main/samples/single.wav
vibe --model ggml-medium.bin --file single.wav
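The steps above can be bundled into a one-shot script. This is a sketch that assumes xvfb is installed, a `vibe` binary is on PATH, and the model and sample files from the commands above are in the working directory:

```shell
# Sketch: write a one-shot headless transcription script.
# Assumes xvfb is installed and `vibe` is on PATH; file names as above.
cat > /tmp/transcribe.sh <<'EOF'
#!/bin/sh
set -e
Xvfb :1 -screen 0 1024x768x24 &
XVFB_PID=$!
export DISPLAY=:1
vibe --model ggml-medium.bin --file single.wav
kill "$XVFB_PID"
EOF
chmod +x /tmp/transcribe.sh
echo "wrote /tmp/transcribe.sh"
```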