substrate: provisioners beyond docker and podman #118

ajbouh opened this issue Feb 9, 2024 · 0 comments

This is a place to take notes on what we need from a provisioner driver implementation.

Substrate uses provisioners to:

- launch services from a string identifier (e.g. a container image name)
- bind mount additional folders at launch
- check and watch service status (e.g. starting, running, exited)
- stop services
- configure network access for services (we don't yet have a very strong policy on this)

The string identifier we use for a service needs to fully specify all the files the service requires. If at all possible, it should pin a specific version.

Podman on Linux is our best-supported provisioner. Docker is almost as good, though we only test it on macOS.

| Provisioner | Backend OS | Tested GPU access | Service Primitive | Image Definition | Example Image Reference |
| --- | --- | --- | --- | --- | --- |
| Docker | Linux | | container | Dockerfile | ghcr.io/ajbouh/substrate:substrate-faster-whisper |
| Podman | Linux | | container | Dockerfile | ghcr.io/ajbouh/substrate:substrate-faster-whisper |
| Docker Desktop | macOS | | container | Dockerfile | ghcr.io/ajbouh/substrate:substrate-faster-whisper |

Unfortunately, Docker on macOS does not provide access to any accelerators. Given that Macs are among the fastest machines available for machine learning inference, we'd like to support them if we can do so easily.

These are the questions we have:

| Provisioner | Backend OS | Tested GPU access | Service Primitive | Image Definition | Example Image Reference |
| --- | --- | --- | --- | --- | --- |
| custom | macOS | | ? | ? | ? |

One possible option would be host processes, natively built and packaged with nixpkgs:

| Provisioner | Backend OS | Tested GPU access | Service Primitive | Image Definition | Example Image Reference |
| --- | --- | --- | --- | --- | --- |
| custom | macOS | | OS process | nixexpr | /nix/store/8m3wjb23sfbjpjsj4l82b4zh9xnw62hh-faster-whisper |

There are still other questions to answer, but I'm writing this up now as a starting point for if/when it matters.
