This is a place to take notes on what we need from a provisioner driver implementation.
Substrate uses provisioners to:
- launch services from a string identifier (e.g. a container image name)
- bind mount additional folders at launch
- check and watch service status (e.g. starting, running, exited)
- stop services
- configure network access for services (we don't yet have a very strong policy on this)
The string identifier we use for a service needs to fully specify all of the files that service requires. If at all possible, it should pin a specific version.
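For container images, one concrete way to satisfy the pinning requirement is to prefer digest references over mutable tags. A small illustrative check (the `@sha256:` heuristic is an assumption about OCI-style references, not Substrate's actual policy):

```go
package main

import (
	"fmt"
	"strings"
)

// isPinned reports whether an image reference is pinned to an
// immutable version, i.e. it carries a content digest rather than
// only a mutable tag. Illustrative heuristic only.
func isPinned(ref string) bool {
	return strings.Contains(ref, "@sha256:")
}

func main() {
	// A tag like :substrate-faster-whisper can move between pushes...
	fmt.Println(isPinned("ghcr.io/ajbouh/substrate:substrate-faster-whisper")) // false
	// ...while a digest reference always resolves to the same bytes.
	fmt.Println(isPinned("ghcr.io/example/image@sha256:0123abcd")) // true
}
```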
Podman on Linux is our best-supported provisioner. Docker is almost as good, though we only test it on macOS.
| Provisioner Backend | OS | Tested | GPU access | Service Primitive | Image Definition | Example Image Reference |
|---|---|---|---|---|---|---|
| Docker | Linux | ❌ | ✅ | container | Dockerfile | ghcr.io/ajbouh/substrate:substrate-faster-whisper |
| Podman | Linux | ✅ | ✅ | container | Dockerfile | ghcr.io/ajbouh/substrate:substrate-faster-whisper |
| Docker Desktop | macOS | ✅ | ❌ | container | Dockerfile | ghcr.io/ajbouh/substrate:substrate-faster-whisper |
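To make the container-backed rows concrete, a driver might assemble a `podman run` (or `docker run`) invocation from the image reference and the extra bind mounts. A sketch of that argument construction — the `podmanRunArgs` helper and its shape are hypothetical, though `run`, `--detach`, and `--volume` are standard podman CLI spellings:

```go
package main

import (
	"fmt"
	"strings"
)

// podmanRunArgs builds the argument list a podman-backed driver
// might pass to the podman CLI to launch a service with extra
// bind mounts. Each mount is a (source, destination) pair.
func podmanRunArgs(image string, mounts [][2]string) []string {
	args := []string{"run", "--detach"}
	for _, m := range mounts {
		args = append(args, "--volume", m[0]+":"+m[1])
	}
	return append(args, image)
}

func main() {
	args := podmanRunArgs("ghcr.io/ajbouh/substrate:substrate-faster-whisper",
		[][2]string{{"/srv/models", "/models"}})
	fmt.Println("podman " + strings.Join(args, " "))
}
```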
Unfortunately, Docker on macOS does not provide access to any accelerators. Given that Macs are some of the fastest machines available for machine learning inference, we'd like to support them if we easily can.
These are the questions we have:
| Provisioner Backend | OS | Tested | GPU access | Service Primitive | Image Definition | Example Image Reference |
|---|---|---|---|---|---|---|
| custom | macOS | ✅ | ✅ | ? | ? | ? |
One possible option would be host processes that are natively built and packaged with nixpkgs.
`/nix/store/8m3wjb23sfbjpjsj4l82b4zh9xnw62hh-faster-whisper`
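A store path like this already satisfies the "fully specify and pin" requirement: the hash prefix identifies an exact build. As an illustration, a hypothetical driver could recover a human-readable service name from such an identifier:

```go
package main

import (
	"fmt"
	"strings"
)

// storePathName extracts the human-readable name from a Nix store
// path of the form /nix/store/<hash>-<name>. Hypothetical helper,
// just to show how such identifiers could be inspected.
func storePathName(path string) string {
	base := strings.TrimPrefix(path, "/nix/store/")
	if i := strings.IndexByte(base, '-'); i >= 0 {
		return base[i+1:]
	}
	return base
}

func main() {
	fmt.Println(storePathName("/nix/store/8m3wjb23sfbjpjsj4l82b4zh9xnw62hh-faster-whisper"))
}
```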
There are still other questions to answer, but I'm writing this up now as a starting point for if/when it matters.