
Commit 015a2b1

dbkinder and ashahba authored
doc: fix markdown (#474)
* fix multiple H1 headings
* remove unnecessary use of HTML
* fix missing indents on ordered list content

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Co-authored-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
1 parent 33f8329 commit 015a2b1


ChatQnA/README.md

Lines changed: 64 additions & 64 deletions
![Flow Chart](./assets/img/chatqna_flow_chart.png)

This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models, in particular LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.

## Deploy ChatQnA Service

The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.

Currently we support two ways of deploying ChatQnA services with docker compose:

1. Start services using the docker image on `docker hub`:

   ```bash
   docker pull opea/chatqna:latest
   ```

   Two types of UI are supported now; choose the one you like and pull the corresponding docker image.

   If you choose the conversational UI, follow the [instruction](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker/gaudi#-launch-the-conversational-ui-optional) and modify the [compose.yaml](./docker/xeon/compose.yaml).

   ```bash
   docker pull opea/chatqna-ui:latest
   # or
   docker pull opea/chatqna-conversation-ui:latest
   ```

2. Start services using the docker images `built from source`: [Guide](./docker)
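
   As a rough sketch of what building from source involves (the Dockerfile path and image tag below are assumptions for illustration; the linked guide is authoritative):

   ```bash
   # Clone the examples repository and build the ChatQnA image locally.
   # The Dockerfile location and tag are illustrative, not the guide's
   # exact values.
   git clone https://github.com/opea-project/GenAIExamples.git
   cd GenAIExamples/ChatQnA/docker
   docker build -t opea/chatqna:latest -f Dockerfile .
   ```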

### Setup Environment Variable

To set up environment variables for deploying ChatQnA services, follow these steps:

1. Set the required environment variables (see the sketch after this list for one way to determine `host_ip`):

   ```bash
   # Example: host_ip="192.168.1.1"
   export host_ip="External_Public_IP"
   # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
   export no_proxy="Your_No_Proxy"
   export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
   ```

2. If you are in a proxy environment, also set the proxy-related environment variables:

   ```bash
   export http_proxy="Your_HTTP_Proxy"
   export https_proxy="Your_HTTPs_Proxy"
   ```

3. Set up other environment variables:

   > Notice that you can only choose **one** command below to set up the environment variables, according to your hardware. Otherwise, the port numbers may be set incorrectly.

   ```bash
   # on Gaudi
   source ./docker/gaudi/set_env.sh
   # on Xeon
   source ./docker/xeon/set_env.sh
   # on Nvidia GPU
   source ./docker/gpu/set_env.sh
   ```
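
As an aside on step 1: `host_ip` must be an address that other machines (and containers) can reach. A minimal sketch for discovering it, assuming the first address reported by `hostname -I` belongs to the externally reachable interface:

```bash
# Take the first IP of this host as host_ip; verify it is the externally
# reachable interface before using it for real deployments.
export host_ip=$(hostname -I | awk '{print $1}')
echo "host_ip=${host_ip}"
```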

### Deploy ChatQnA on Gaudi

Find the corresponding [compose.yaml](./docker/gaudi/compose.yaml).

```bash
cd GenAIExamples/ChatQnA/docker/gaudi/
docker compose up -d
```

> Notice: Currently only the **Habana Driver 1.16.x** is supported for Gaudi.

Refer to the [Gaudi Guide](./docker/gaudi/README.md) to build docker images from source.
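
To verify that the deployment came up, standard `docker compose` subcommands work here (and equally for the Xeon and NVIDIA GPU deployments below); the service name in the log command is a placeholder, so check `compose.yaml` for the real ones:

```bash
# List the services started by this compose file and confirm they are Up.
docker compose ps
# Tail the logs of one service if something looks unhealthy
# ("tgi-service" is an illustrative name, not necessarily the real one).
docker compose logs -f tgi-service
```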

### Deploy ChatQnA on Xeon

Find the corresponding [compose.yaml](./docker/xeon/compose.yaml).

```bash
cd GenAIExamples/ChatQnA/docker/xeon/
docker compose up -d
```

Refer to the [Xeon Guide](./docker/xeon/README.md) for more instructions on building docker images from source.

### Deploy ChatQnA on NVIDIA GPU

```bash
cd GenAIExamples/ChatQnA/docker/gpu/
docker compose up -d
```

Refer to the [NVIDIA GPU Guide](./docker/gpu/README.md) for more instructions on building docker images from source.

### Deploy ChatQnA into Kubernetes on Xeon & Gaudi with GMC

Refer to the [Kubernetes Guide](./kubernetes/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi with GMC.

### Deploy ChatQnA into Kubernetes on Xeon & Gaudi without GMC

Refer to the [Kubernetes Guide](./kubernetes/manifests/README.md) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi without GMC.
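
For orientation, a manifest-based deployment generally reduces to applying the YAML from that guide; the file name below is hypothetical, so use the actual manifests from the linked directory:

```bash
# Apply the ChatQnA manifests to the current cluster and watch the pods
# come up. "chatqna.yaml" stands in for the real manifest file(s).
kubectl apply -f chatqna.yaml
kubectl get pods -w
```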

### Deploy ChatQnA into Kubernetes using Helm Chart

Install Helm (version >= 3.15) first. Refer to the [Helm Installation Guide](https://helm.sh/docs/intro/install/) for more information.

Refer to the [ChatQnA helm chart](https://github.com/opea-project/GenAIInfra/tree/main/helm-charts/chatqna) for instructions on deploying ChatQnA into Kubernetes on Xeon & Gaudi.
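
As a minimal sketch of what such a deployment looks like (the release name, chart path, and `--set` key below are assumptions; the chart's own README defines the real values):

```bash
# Install the chart from a local checkout of GenAIInfra under the release
# name "chatqna"; the value key for the HF token is illustrative.
git clone https://github.com/opea-project/GenAIInfra.git
helm install chatqna ./GenAIInfra/helm-charts/chatqna \
  --set global.HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
```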

### Deploy ChatQnA on AI PC

Refer to the [AI PC Guide](./docker/aipc/README.md) for instructions on deploying ChatQnA on AI PC.

## Consume ChatQnA Service

There are two ways of consuming the ChatQnA service:

1. Use the cURL command on a terminal:

   ```bash
   curl http://${host_ip}:8888/v1/chatqna \
       -H "Content-Type: application/json" \
       -d '{
            "messages": "What is the revenue of Nike in 2023?"
       }'
   ```

2. Access via frontend

   To access the frontend, open the following URL in your browser: `http://{host_ip}:5173`

   By default, the UI runs on port 5173 internally.

   If you choose the conversational UI, use this URL instead: `http://{host_ip}:5174`
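
   A quick way to confirm the UI is reachable before opening a browser (a plain HTTP header check against the ports documented above):

   ```bash
   # Expect an HTTP 200 from the default UI; use port 5174 instead if you
   # deployed the conversational UI.
   curl -I http://${host_ip}:5173
   ```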

## Troubleshooting

1. If you get errors like "Access Denied", [validate the microservices](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/docker/xeon#validate-microservices) first. A simple example:

   ```bash
   http_proxy="" curl ${host_ip}:6006/embed -X POST -d '{"inputs":"What is Deep Learning?"}' -H 'Content-Type: application/json'
   ```

2. (Docker only) If all the microservices work well individually, check whether port 8888 on `${host_ip}` is already allocated by another user; if so, modify the port mapping in `compose.yaml` (see the sketch after this list).

3. (Docker only) If you get errors like "The container name is in use", change the container name in `compose.yaml` (the sketch below also shows how to find the conflicting container).
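
The checks referenced in items 2 and 3 can be sketched with standard Docker and Linux tooling (the "chatqna" name filter is illustrative):

```bash
# Item 2: see whether something is already listening on port 8888.
sudo ss -ltnp | grep 8888
# Item 3: find an existing container with a conflicting name, then remove
# it or change container_name in compose.yaml.
docker ps -a --filter "name=chatqna"
```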

## Monitoring OPEA Service with Prometheus and Grafana dashboard

OPEA microservice deployment can easily be monitored through Grafana dashboards in conjunction with Prometheus data collection. Follow the [README](https://github.com/opea-project/GenAIEval/blob/main/evals/benchmark/grafana/README.md) to set up the Prometheus and Grafana servers and import dashboards to monitor the OPEA service.
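
Before importing dashboards, it can help to confirm that a service actually exposes Prometheus metrics. A sketch, assuming TGI's standard `/metrics` endpoint and a placeholder port (take the real one from your `compose.yaml`):

```bash
# Print the first Prometheus metric lines from the TGI service.
curl -s http://${host_ip}:8008/metrics | head -n 20
```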

![chatqna dashboards](./assets/img/chatqna_dashboards.png)
![tgi dashboard](./assets/img/tgi_dashboard.png)
