Commit

using ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu as the tgi image on xeon.

Signed-off-by: zhlsunshine <huailong.zhang@intel.com>
zhlsunshine committed Sep 19, 2024
1 parent c0190bb commit c1b24d8
Showing 18 changed files with 21 additions and 21 deletions.
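Every change in this commit swaps a pinned TGI release tag (`2.2.0` or `2.1.0`) for the Intel-CPU build. As a quick sanity check before deploying, assuming access to ghcr.io, the new image can be pulled directly:

```bash
# Fetch the Intel-CPU build of TGI referenced throughout this commit
docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
```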
2 changes: 1 addition & 1 deletion AudioQnA/kubernetes/intel/cpu/xeon/manifest/audioqna.yaml
@@ -247,7 +247,7 @@ spec:
- envFrom:
- configMapRef:
name: audio-qna-config
- image: ghcr.io/huggingface/text-generation-inference:2.2.0
+ image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
name: llm-dependency-deploy-demo
securityContext:
capabilities:
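To roll out the updated manifest and confirm the pods picked up the new image, something like the following works (a sketch; the namespace and any label selectors are assumptions, not taken from this diff):

```bash
kubectl apply -f AudioQnA/kubernetes/intel/cpu/xeon/manifest/audioqna.yaml
# List each pod alongside the image(s) it runs, filtering for TGI
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' | grep text-generation-inference
```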
4 changes: 2 additions & 2 deletions ChatQnA/docker_compose/intel/cpu/xeon/README.md
@@ -248,7 +248,7 @@ For users in China who are unable to download models directly from Huggingface,
export HF_TOKEN=${your_hf_token}
export HF_ENDPOINT="https://hf-mirror.com"
model_name="Intel/neural-chat-7b-v3-3"
- docker run -p 8008:80 -v ./data:/data --name tgi-service -e HF_ENDPOINT=$HF_ENDPOINT -e http_proxy=$http_proxy -e https_proxy=$https_proxy --shm-size 1g ghcr.io/huggingface/text-generation-inference:2.2.0 --model-id $model_name
+ docker run -p 8008:80 -v ./data:/data --name tgi-service -e HF_ENDPOINT=$HF_ENDPOINT -e http_proxy=$http_proxy -e https_proxy=$https_proxy --shm-size 1g ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu --model-id $model_name
```
2. Offline
@@ -262,7 +262,7 @@ For users in China who are unable to download models directly from Huggingface,
```bash
export HF_TOKEN=${your_hf_token}
export model_path="/path/to/model"
- docker run -p 8008:80 -v $model_path:/data --name tgi_service --shm-size 1g ghcr.io/huggingface/text-generation-inference:2.2.0 --model-id /data
+ docker run -p 8008:80 -v $model_path:/data --name tgi_service --shm-size 1g ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu --model-id /data
```
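Whichever way the container is started, a quick request confirms the Intel-CPU image is serving (a minimal check, assuming port 8008 as mapped above; `/generate` is TGI's standard inference endpoint):

```bash
curl http://localhost:8008/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs":"What is deep learning?","parameters":{"max_new_tokens":32}}'
```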
### Setup Environment Variables
@@ -1497,7 +1497,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
@@ -1577,7 +1577,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
2 changes: 1 addition & 1 deletion ChatQnA/kubernetes/intel/cpu/xeon/manifest/chatqna.yaml
@@ -1319,7 +1319,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
@@ -1322,7 +1322,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
2 changes: 1 addition & 1 deletion CodeGen/kubernetes/intel/cpu/xeon/manifest/codegen.yaml
@@ -404,7 +404,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
@@ -404,7 +404,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
2 changes: 1 addition & 1 deletion DocSum/kubernetes/intel/README_gmc.md
@@ -8,7 +8,7 @@ Install GMC in your Kubernetes cluster, if you have not already done so, by foll
The DocSum application is defined as a Custom Resource (CR) file that the above GMC operator acts upon. The operator first checks whether the microservices listed in the CR yaml file are running; if not, it starts them and then connects them. When the DocSum RAG pipeline is ready, the service endpoint details are returned, letting you use the application. Running `kubectl get pods` will show all the component microservices, in particular embedding, retriever, rerank, and llm.

The DocSum pipeline uses prebuilt images. The Xeon version uses the prebuilt image `llm-docsum-tgi:latest`, which internally leverages
- the image `ghcr.io/huggingface/text-generation-inference:2.2.0`. The service is called tgi-svc. Meanwhile, the Gaudi version launches the
+ the image `ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu`. The service is called tgi-svc. Meanwhile, the Gaudi version launches the
service tgi-gaudi-svc, which uses the image `ghcr.io/huggingface/tgi-gaudi:2.0.1`. Both TGI model services serve the model specified in the LLM_MODEL_ID variable that you export. In the example below we use `Intel/neural-chat-7b-v3-3`.
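For example, assuming GMC deployed the pipeline into the default namespace, the component pods and the Xeon TGI service can be inspected with:

```bash
kubectl get pods
kubectl get svc tgi-svc
```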

[NOTE]
2 changes: 1 addition & 1 deletion DocSum/kubernetes/intel/cpu/xeon/manifest/docsum.yaml
@@ -404,7 +404,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
4 changes: 2 additions & 2 deletions ProductivitySuite/docker_compose/intel/cpu/xeon/compose.yaml
@@ -113,7 +113,7 @@ services:
LANGCHAIN_PROJECT: "opea-reranking-service"
restart: unless-stopped
tgi_service:
- image: ghcr.io/huggingface/text-generation-inference:2.1.0
+ image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
container_name: tgi-service
ports:
- "9009:80"
@@ -174,7 +174,7 @@ services:
ipc: host
restart: always
tgi_service_codegen:
- image: ghcr.io/huggingface/text-generation-inference:2.1.0
+ image: ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
container_name: tgi_service_codegen
ports:
- "8028:80"
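With the retagged images in place, the two TGI services can be pulled and recreated without touching the rest of the stack (a sketch, assuming the compose file above and a checkout of this repo):

```bash
cd ProductivitySuite/docker_compose/intel/cpu/xeon
docker compose pull tgi_service tgi_service_codegen
docker compose up -d tgi_service tgi_service_codegen
```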
@@ -993,7 +993,7 @@ spec:
name: chatqna-tgi-config
securityContext:
{}
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
@@ -229,7 +229,7 @@ spec:
name: codegen-tgi-config
securityContext:
{}
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
@@ -229,7 +229,7 @@ spec:
name: docsum-tgi-config
securityContext:
{}
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
@@ -138,7 +138,7 @@ spec:
- configMapRef:
name: faqgen-tgi-config
securityContext: {}
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
2 changes: 1 addition & 1 deletion ProductivitySuite/tests/test_compose_on_xeon.sh
@@ -22,7 +22,7 @@ function build_docker_images() {
docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log

docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
- docker pull ghcr.io/huggingface/text-generation-inference:2.1.0
+ docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
docker images && sleep 1s
}

@@ -361,7 +361,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
@@ -216,7 +216,7 @@ spec:
name: visualqna-tgi-config
securityContext:
{}
image: "ghcr.io/huggingface/text-generation-inference:2.2.0"
image: "ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /data
2 changes: 1 addition & 1 deletion VisualQnA/tests/test_compose_on_xeon.sh
@@ -21,7 +21,7 @@ function build_docker_images() {
echo "Build all the images with --no-cache, check docker_image_build.log for details..."
docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log

- docker pull ghcr.io/huggingface/text-generation-inference:2.2.0
+ docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
docker images && sleep 1s
}

