[Bug] vLLM images missing from Docker Hub #961
Comments
Note: alternatively,
the "opea/llm-docsum-vllm" image will be added in version 1.1, and it needs OSPDT now.
Both FaqGen & DocSum image variants have the same difference to the (already existing) text-generation vLLM LangChain wrapper image deps:
And a similar difference in what
And all of them install the problematic
@eero-t Could we close this issue if it was fixed?
`docker pull opea/llm-docsum-vllm` is OK now.
I can verify image is in DockerHub. Unfortunately GenAIInfra CI still fails to pull it... |
Priority
Undecided
OS type
Ubuntu
Hardware type
Gaudi2
Installation method
Deploy method
Running nodes
Single Node
What's the version?
latest from DockerHub, with Always pull policy.
Description
DocSum and FaqGen images are missing for vLLM: https://hub.docker.com/u/opea?page=1&search=-vllm
Although they are there for TGI: https://hub.docker.com/u/opea?page=1&search=-tgi
And there are vLLM Dockerfiles for them:
And the ChatQnA (text generation) vLLM image is already there: https://hub.docker.com/r/opea/llm-vllm
Because of this, CI fails for: opea-project/GenAIInfra#610
Reproduce steps
`docker pull opea/llm-docsum-vllm`
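As a quicker way to confirm which of the expected vLLM image tags are actually published, without waiting for a full `docker pull`, one can probe Docker Hub's public repository API. This is a minimal sketch; the endpoint path and the exact set of image names are assumptions, not taken from the issue:

```shell
#!/bin/sh
# Sketch: check whether OPEA vLLM image tags are published on Docker Hub
# by querying its public v2 repository API (endpoint path assumed).

hub_tag_url() {
  # $1 = namespace/repository, $2 = tag
  echo "https://hub.docker.com/v2/repositories/$1/tags/$2"
}

check_tag() {
  # Returns 0 if the tag is listed (HTTP 200), non-zero otherwise.
  curl -fsS -o /dev/null "$(hub_tag_url "$1" "$2")"
}

# Hypothetical image list based on the names mentioned in this issue.
for img in opea/llm-vllm opea/llm-docsum-vllm opea/llm-faqgen-vllm; do
  if check_tag "$img" latest; then
    echo "found:   $img:latest"
  else
    echo "missing: $img:latest"
  fi
done
```

A "missing" line here would match the CI pull failure described above, while "found" for all three would suggest the problem is on the pulling side (e.g. a stale cache despite the Always pull policy).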
Raw log
No response
Attachments
No response