This takes wyoming-faster-whisper and wraps it into an nvidia cuda enabled container.

Note: this is currently only supported on x86_64 systems.

Requirements:
- nvidia cuda compatible gpu
- nvidia linux drivers installed on the host
- up and running docker installation on the host
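The requirements above can be checked up front. A minimal pre-flight sketch (the `check` helper is hypothetical, not part of this repo; it only prints and changes nothing):

```shell
# Hypothetical pre-flight check: report whether each host-side
# prerequisite is available before building the image.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}
check docker       # docker installation on the host
check nvidia-smi   # ships with the nvidia linux drivers
# the example compose.yaml further below also maps /dev/nvidia* device nodes
ls /dev/nvidia* 2>/dev/null || echo "no /dev/nvidia* device nodes found"
```

If anything reports `missing`, install it before continuing.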
- download this repo

  ```shell
  $ git clone https://github.com/mib1185/wyoming-faster-whisper-cuda.git
  ```
- create a `compose.yaml` file, which:
  - builds from the local `Dockerfile`
  - adds the needed parameters for `model` and `language` as command line parameters
  - (optional) enables `debug` logging via command line parameter
  - provides a `data` volume or directory
  - exposes the port `10300/tcp`
  - maps your nvidia gpu related devices into the container (obtain with `ls -la /dev/nvidia*`)
  - (optional) sets `restart: always`

  example `compose.yaml` file:

  ```yaml
  name: wyoming
  services:
    faster-whisper-cuda:
      container_name: faster-whisper-cuda
      build: .
      command: "--model large --language de --debug"
      volumes:
        - ./data:/data
      ports:
        - 10300:10300/tcp
      devices:
        - /dev/nvidia-uvm:/dev/nvidia-uvm
        - /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools
        - /dev/nvidia0:/dev/nvidia0
        - /dev/nvidiactl:/dev/nvidiactl
      restart: always
  ```
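Since the `/dev/nvidia*` device nodes differ between hosts, the `devices:` section has to match what is actually present. A small sketch that prints ready-to-paste entries (the `compose_devices` helper is hypothetical, not part of this repo):

```shell
# Hypothetical helper: emit a compose "devices:" entry for every
# device node matching the (unquoted, glob-expanded) pattern.
compose_devices() {
  for dev in ${1:-/dev/nvidia*}; do
    if [ -e "$dev" ]; then
      printf '      - %s:%s\n' "$dev" "$dev"
    fi
  done
}
compose_devices   # on a GPU host this lists /dev/nvidia0, /dev/nvidiactl, ...
```

Paste the output into the `devices:` list of your `compose.yaml`.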
- start the service

  on first start, the docker image is built, which takes some time

  ```shell
  $ docker compose up -d
  ```
- check if the service is running

  ```shell
  $ docker ps
  CONTAINER ID   IMAGE                         COMMAND                  CREATED         STATUS         PORTS                                           NAMES
  474e37a84326   wyoming-faster-whisper-cuda   "/run.sh --model lar…"   3 minutes ago   Up 3 minutes   0.0.0.0:10300->10300/tcp, :::10300->10300/tcp   faster-whisper-cuda
  ```
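Beyond `docker ps`, you can probe the exposed port directly. A hedged smoke-test sketch, assuming the host and port from the example `compose.yaml`; `port_open` is a hypothetical helper and the check relies on bash's `/dev/tcp` redirection:

```shell
# Hypothetical smoke test: succeed if a TCP connection to host $1 on
# port $2 can be opened (bash-only /dev/tcp redirection).
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}
if port_open localhost 10300; then
  echo "wyoming port 10300 is open"
else
  echo "wyoming port 10300 is closed"
fi
```

This only confirms something is listening; for a full check, point a Wyoming client (e.g. Home Assistant) at the service.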