Commit: fix tgi xeon tag (#641)

Spycsh authored Aug 21, 2024
1 parent 67df280 commit 6674832
Showing 22 changed files with 23 additions and 23 deletions.
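This commit swaps the mutable `latest-intel-cpu` tag for the SHA-pinned build `sha-e4201f4-intel-cpu` in every Xeon deployment file, presumably so deployments stop drifting whenever upstream pushes a new `latest` image. A quick check that the pinned tag resolves (standard Docker CLI):

```bash
# Fetch the pinned TGI build for Intel CPU (Xeon)
docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu

# Confirm the tag is now present locally
docker images ghcr.io/huggingface/text-generation-inference
```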
2 changes: 1 addition & 1 deletion AudioQnA/docker/xeon/compose.yaml
@@ -41,7 +41,7 @@ services:
     environment:
       TTS_ENDPOINT: ${TTS_ENDPOINT}
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "3006:80"

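To exercise one of the updated compose files locally, something like the following should fetch and start the pinned service (a sketch; it assumes the environment variables the compose file references, such as `TTS_ENDPOINT`, are already exported):

```bash
cd AudioQnA/docker/xeon
# Pull just the TGI service so the newly pinned tag is fetched explicitly
docker compose pull tgi-service
docker compose up -d tgi-service
```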
2 changes: 1 addition & 1 deletion ChatQnA/docker/xeon/compose.yaml
@@ -102,7 +102,7 @@ services:
       HF_HUB_ENABLE_HF_TRANSFER: 0
     restart: unless-stopped
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "9009:80"

2 changes: 1 addition & 1 deletion ChatQnA/docker/xeon/compose_qdrant.yaml
@@ -102,7 +102,7 @@ services:
       HF_HUB_ENABLE_HF_TRANSFER: 0
     restart: unless-stopped
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "6042:80"

2 changes: 1 addition & 1 deletion ChatQnA/kubernetes/README.md
@@ -20,7 +20,7 @@ The ChatQnA uses the below prebuilt images if you choose a Xeon deployment
 - retriever: opea/retriever-redis:latest
 - tei_xeon_service: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
 - reranking: opea/reranking-tei:latest
-- tgi-service: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+- tgi-service: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
 - llm: opea/llm-tgi:latest
 - chaqna-xeon-backend-server: opea/chatqna:latest

2 changes: 1 addition & 1 deletion ChatQnA/kubernetes/manifests/xeon/chatqna.yaml
@@ -1121,7 +1121,7 @@ spec:
           name: chatqna-tgi-config
       securityContext:
         {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
      imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

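After applying an updated manifest, one way to verify that the TGI container actually runs the pinned tag (a minimal sketch, assuming the manifest is applied to the current namespace):

```bash
kubectl apply -f chatqna.yaml
# Print every container image in the namespace and look for the pinned tag
kubectl get pods -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}' \
  | grep text-generation-inference
```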
2 changes: 1 addition & 1 deletion CodeGen/docker/xeon/compose.yaml
@@ -3,7 +3,7 @@

 services:
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "8028:80"

2 changes: 1 addition & 1 deletion CodeGen/kubernetes/manifests/xeon/codegen.yaml
@@ -239,7 +239,7 @@ spec:
           name: codegen-tgi-config
       securityContext:
         {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

2 changes: 1 addition & 1 deletion CodeGen/kubernetes/manifests/xeon/ui/react-codegen.yaml
@@ -126,7 +126,7 @@ spec:
         - name: no_proxy
           value:
       securityContext: {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

2 changes: 1 addition & 1 deletion CodeGen/tests/test_codegen_on_xeon.sh
@@ -22,7 +22,7 @@ function build_docker_images() {
     service_list="codegen codegen-ui llm-tgi"
     docker compose -f docker_build_compose.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

-    docker pull ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     docker images
 }

2 changes: 1 addition & 1 deletion CodeTrans/docker/xeon/compose.yaml
@@ -3,7 +3,7 @@

 services:
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: codetrans-tgi-service
     ports:
       - "8008:80"

2 changes: 1 addition & 1 deletion CodeTrans/kubernetes/manifests/xeon/codetrans.yaml
@@ -239,7 +239,7 @@ spec:
           name: codetrans-tgi-config
       securityContext:
         {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

2 changes: 1 addition & 1 deletion DocSum/docker/xeon/compose.yaml
@@ -3,7 +3,7 @@

 services:
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "8008:80"

2 changes: 1 addition & 1 deletion DocSum/kubernetes/README.md
@@ -8,7 +8,7 @@ Install GMC in your Kubernetes cluster, if you have not already done so, by foll
 The DocSum application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. It first checks if the microservices listed in the CR yaml file are running, if not it starts them and then proceeds to connect them. When the DocSum RAG pipeline is ready, the service endpoint details are returned, letting you use the application. Should you use "kubectl get pods" commands you will see all the component microservices, in particular embedding, retriever, rerank, and llm.

 The DocSum pipeline uses prebuilt images. The Xeon version uses the prebuilt image llm-docsum-tgi:latest which internally leverages the
-the image ghcr.io/huggingface/text-generation-inference:latest-intel-cpu. The service is called tgi-svc. Meanwhile, the Gaudi version launches the
+the image ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu. The service is called tgi-svc. Meanwhile, the Gaudi version launches the
 service tgi-gaudi-svc, which uses the image ghcr.io/huggingface/tgi-gaudi:1.2.1. Both TGI model services serve the model specified in the LLM_MODEL_ID variable that is exported by you. In the below example we use Intel/neural-chat-7b-v3-3.

 [NOTE]

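The README text above depends on an exported `LLM_MODEL_ID`; as a sketch of the setup it describes:

```bash
# Model served by both TGI services, per the README's own example
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"

# Once the GMC operator has connected the pipeline, list the component microservices
kubectl get pods
```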
2 changes: 1 addition & 1 deletion DocSum/kubernetes/manifests/xeon/docsum.yaml
@@ -239,7 +239,7 @@ spec:
           name: docsum-tgi-config
       securityContext:
         {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

2 changes: 1 addition & 1 deletion DocSum/kubernetes/manifests/xeon/ui/react-docsum.yaml
@@ -126,7 +126,7 @@ spec:
         - name: no_proxy
           value:
       securityContext: {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

2 changes: 1 addition & 1 deletion FaqGen/docker/xeon/compose.yaml
@@ -3,7 +3,7 @@

 services:
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-xeon-server
     ports:
       - "8008:80"

2 changes: 1 addition & 1 deletion FaqGen/kubernetes/manifests/xeon/ui/react-faqgen.yaml
@@ -126,7 +126,7 @@ spec:
         - name: no_proxy
           value:
       securityContext: {}
-      image: "ghcr.io/huggingface/text-generation-inference:latest-intel-cpu"
+      image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
       imagePullPolicy: IfNotPresent
       volumeMounts:
         - mountPath: /data

2 changes: 1 addition & 1 deletion SearchQnA/docker/xeon/compose.yaml
@@ -73,7 +73,7 @@ services:
       HUGGINGFACEHUB_API_TOKEN: ${HUGGINGFACEHUB_API_TOKEN}
     restart: unless-stopped
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "3006:80"

2 changes: 1 addition & 1 deletion SearchQnA/tests/test_searchqna_on_xeon.sh
@@ -23,7 +23,7 @@ function build_docker_images() {
     docker compose -f docker_build_compose.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

     docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
-    docker pull ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     docker images
 }

2 changes: 1 addition & 1 deletion Translation/docker/xeon/compose.yaml
@@ -3,7 +3,7 @@

 services:
   tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-service
     ports:
       - "8008:80"

4 changes: 2 additions & 2 deletions VisualQnA/docker/xeon/README.md
@@ -71,12 +71,12 @@ cd ../../../..
 ### 4. Pull TGI Xeon Image

 ```bash
-docker pull ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
 ```

 Then run the command `docker images`, you will have the following 4 Docker Images:

-1. `ghcr.io/huggingface/text-generation-inference:latest-intel-cpu`
+1. `ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu`
 2. `opea/lvm-tgi:latest`
 3. `opea/visualqna:latest`
 4. `opea/visualqna-ui:latest`

2 changes: 1 addition & 1 deletion VisualQnA/docker/xeon/compose.yaml
@@ -3,7 +3,7 @@

 services:
   llava-tgi-service:
-    image: ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
+    image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
     container_name: tgi-llava-xeon-server
     ports:
       - "9399:80"

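A SHA-named tag such as `sha-e4201f4-intel-cpu` is far more stable than `latest-intel-cpu`, but tags remain mutable in principle; for fully immutable pinning, an image can be referenced by digest instead (sketch only; the digest shown is a placeholder, not the real one for this tag):

```bash
# Resolve the pinned tag to its content digest (requires the image to be pulled first)
docker inspect --format '{{index .RepoDigests 0}}' \
  ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
# Example output (placeholder digest):
#   ghcr.io/huggingface/text-generation-inference@sha256:0123...abcd
# A compose file can then reference the image immutably:
#   image: ghcr.io/huggingface/text-generation-inference@sha256:0123...abcd
```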
