Then run the command `docker images`. You will have the following 7 Docker images:
By default, the embedding, reranking and LLM models are set to a default value as listed below:

Change the `xxx_MODEL_ID` below for your needs.

For users in China who are unable to download models directly from Huggingface, you can use [ModelScope](https://www.modelscope.cn/models) or a Huggingface mirror to download models. TGI can load the models either online or offline, as described below:
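As a sketch of what overriding the defaults can look like (the variable names and model IDs below are illustrative assumptions, not taken verbatim from this README), you would export the IDs before starting the services:

```shell
# Illustrative only: variable names and model IDs are assumed,
# not quoted from this README. Export overrides before launching
# the services so docker compose picks them up.
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
```

Any model with the same serving interface can be substituted the same way.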
Since a TEI Gaudi Docker image hasn't been published, we'll need to build it from the [tei-gaudi](https://github.com/huggingface/tei-gaudi) repository.
To fortify AI initiatives in production, the Guardrails microservice can secure model inputs and outputs, helping you build trustworthy, safe, and secure LLM-based applications.
Then run the command `docker images`. You will have the following 8 Docker images:

- `opea/embedding-tei:latest`
- `opea/retriever-redis:latest`
- `opea/reranking-tei:latest`
- `opea/llm-tgi:latest` or `opea/llm-vllm:latest` or `opea/llm-vllm-ray:latest`
- `opea/tei-gaudi:latest`
- `opea/dataprep-redis:latest`
- `opea/chatqna:latest` or `opea/chatqna-guardrails:latest` or `opea/chatqna-without-rerank:latest`
- `opea/chatqna-ui:latest`
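To confirm that all of the images above are present, a small helper like the following can pipe the output of `docker images` through an exact-match check (a sketch: the function name is ours, and for each `or` group it checks only the first alternative):

```shell
# check_images: reads repository names (one per line, e.g. from
# `docker images --format '{{.Repository}}'`) on stdin and prints
# any image from the expected set that is missing.
# Helper name and expected set are illustrative, derived from the list above.
check_images() {
  expected="opea/embedding-tei opea/retriever-redis opea/reranking-tei opea/llm-tgi opea/tei-gaudi opea/dataprep-redis opea/chatqna opea/chatqna-ui"
  listing="$(cat)"
  for img in $expected; do
    # -x forces a whole-line match so opea/chatqna never matches opea/chatqna-ui
    printf '%s\n' "$listing" | grep -qx "$img" || echo "missing: $img"
  done
}

# Usage: docker images --format '{{.Repository}}' | check_images
```

An empty output means every expected image was found.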
To construct the Mega Service, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the `codegen.py` Python script. Build the MegaService Docker image via the command below:
CodeTrans/docker_compose/intel/cpu/xeon/README.md
After launching your instance, you can connect to it using SSH (for Linux instances).
First of all, you need to build the Docker images locally and install the Python package. This step can be skipped once the Docker images are published to Docker Hub.