docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:full --all-in-one "/models/" 65B
Unable to find image 'ghcr.io/ggerganov/llama.cpp:full' locally
full: Pulling from ggerganov/llama.cpp
2ab09b027e7f: Pull complete
abc582ff34c3: Pull complete
474c54188cc5: Pull complete
90dde168a635: Pull complete
4baa98a3bbd6: Pull complete
40709b48f1dd: Pull complete
Digest: sha256:0e26a42b34ad42f285a4327fbe099674137b119e6efea07345a7c17ab8a4b13e
Status: Downloaded newer image for ghcr.io/ggerganov/llama.cpp:full
Downloading model...
Traceback (most recent call last):
  File "/app/./download-pth.py", line 3, in <module>
    from tqdm import tqdm
ModuleNotFoundError: No module named 'tqdm'
Looks like tqdm should be added here:
https://github.com/ggerganov/llama.cpp/blob/da5303c1ea68aa19db829c634f1e10d08d409680/.devops/full.Dockerfile#L9
Also see #308?
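For reference, a minimal sketch of the kind of change being suggested, assuming the full image's Dockerfile installs its Python dependencies with pip; the base image, package list, and line numbers here are hypothetical and may not match the actual .devops/full.Dockerfile:

```Dockerfile
# Hypothetical excerpt, not the actual .devops/full.Dockerfile.
FROM ubuntu:22.04

RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Assumed Python dependency line; appending tqdm so that
# /app/download-pth.py can import it inside the container.
RUN python3 -m pip install --no-cache-dir numpy sentencepiece tqdm
```

With tqdm baked into the image, the `--all-in-one` download step should no longer hit the ModuleNotFoundError shown above.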
Duplicate of #289, fixed in master
Merge pull request ggml-org#310 from gjmulder/auto-docker (commit 3977eea): Auto docker v2 - dockerised Open Llama 3B image w/OpenBLAS enabled server