doc: fix heading levels in markdown content (#627)
* only one H1 (#) heading for the title is allowed, so fix the extra H1
  headings (and the subheadings under those) to appropriate levels
* fix some inline code blocks containing leading/trailing spaces
* fix some indenting issues under an ordered list item

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
dbkinder authored Sep 6, 2024
1 parent b2e64d2 commit a8a46bc
Showing 17 changed files with 145 additions and 139 deletions.
20 changes: 10 additions & 10 deletions comps/agent/langchain/README.md
@@ -33,34 +33,34 @@ The tools are registered with a yaml file. We support the following types of too

Currently we have implemented an OpenAI chat-completion-compatible API for agents. We are working to support the OpenAI Assistants APIs.
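
For illustration, an OpenAI-style chat request to the agent could look like the sketch below; the port is taken from the docker run example later in this README, while the route and payload are assumptions based on the OpenAI chat completions convention (see Section 3 for the actual validation script).

```bash
# Sketch only: the port (9090) comes from the docker run example later in this
# README; the exact route is an assumption based on the OpenAI chat
# completions convention.
curl http://${ip_address}:9090/v1/chat/completions \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is OPEA?"}]}'
```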

-# 🚀2. Start Agent Microservice
+## 🚀2. Start Agent Microservice

-## 2.1 Option 1: with Python
+### 2.1 Option 1: with Python

-### 2.1.1 Install Requirements
+#### 2.1.1 Install Requirements

```bash
cd comps/agent/langchain/
pip install -r requirements.txt
```

-### 2.1.2 Start Microservice with Python Script
+#### 2.1.2 Start Microservice with Python Script

```bash
cd comps/agent/langchain/
python agent.py
```

-## 2.2 Option 2. Start Microservice with Docker
+### 2.2 Option 2. Start Microservice with Docker

-### 2.2.1 Build Microservices
+#### 2.2.1 Build Microservices

```bash
cd GenAIComps/ # back to GenAIComps/ folder
docker build -t opea/comps-agent-langchain:latest -f comps/agent/langchain/docker/Dockerfile .
```

-### 2.2.2 Start microservices
+#### 2.2.2 Start microservices

```bash
export ip_address=$(hostname -I | awk '{print $1}')
@@ -87,7 +87,7 @@ docker logs comps-langchain-agent-endpoint
> docker run --rm --runtime=runc --name="comps-langchain-agent-endpoint" -v ./comps/agent/langchain/:/home/user/comps/agent/langchain/ -p 9090:9090 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} -e model=${model} -e ip_address=${ip_address} -e strategy=react -e llm_endpoint_url=http://${ip_address}:8080 -e llm_engine=tgi -e recursion_limit=5 -e require_human_feedback=false -e tools=/home/user/comps/agent/langchain/tools/custom_tools.yaml opea/comps-agent-langchain:latest
> ```
-# 🚀 3. Validate Microservice
+## 🚀 3. Validate Microservice
Once the microservice starts, users can use the script below to invoke it.
@@ -104,7 +104,7 @@ data: [DONE]
```
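
The invocation script itself is collapsed in this view; a hedged sketch of what such a streaming call typically looks like (the route and the payload field are assumptions, not taken from the collapsed script):

```bash
# Hypothetical sketch; the collapsed script above is authoritative.
# A successful call streams "data: ..." lines and ends with "data: [DONE]".
curl http://${ip_address}:9090/v1/chat/completions \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the weather today in Austin?"}'
```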
-# 🚀 4. Provide your own tools
+## 🚀 4. Provide your own tools

- Define tools

@@ -180,7 +180,7 @@ data: 'The weather information in Austin is not available from the Open Platform
data: [DONE]
```

-# 5. Customize agent strategy
+## 5. Customize agent strategy

For advanced developers who want to implement their own agent strategies, you can add a separate folder under `src/strategy`, implement your agent by inheriting the `BaseAgent` class, and register your strategy in `src/agent.py`. The architecture of this agent microservice is shown in the diagram below as a reference.
![Architecture Overview](agent_arch.jpg)
2 changes: 1 addition & 1 deletion comps/cores/telemetry/README.md
@@ -8,7 +8,7 @@ OPEA Comps currently provides telemetry functionalities for metrics and tracing

OPEA microservice metrics are exported in Prometheus format and are divided into two categories: general metrics and specific metrics.

-General metrics, such as `http_requests_total `, `http_request_size_bytes`, are exposed by every microservice endpoint using the [prometheus-fastapi-instrumentator](https://github.com/trallnag/prometheus-fastapi-instrumentator).
+General metrics, such as `http_requests_total`, `http_request_size_bytes`, are exposed by every microservice endpoint using the [prometheus-fastapi-instrumentator](https://github.com/trallnag/prometheus-fastapi-instrumentator).

Specific metrics are the built-in metrics exposed under `/metrics` by each specific microservice, such as TGI, vLLM, TEI, and others. Both types of metrics adhere to the Prometheus format.
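
As a quick sanity check, both kinds of metrics can be scraped with plain HTTP; in this sketch the port is a placeholder for whichever microservice you are inspecting:

```bash
# Sketch: scrape the Prometheus-format metrics of a microservice.
# Replace 6000 with the port of the microservice you are inspecting.
curl -s http://localhost:6000/metrics | grep http_requests_total
```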

2 changes: 1 addition & 1 deletion comps/dataprep/redis/README.md
@@ -105,7 +105,7 @@ export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}

- Build docker image with langchain

-* option 1: Start single-process version (for 1-10 files processing)
+- option 1: Start single-process version (for 1-10 files processing)

```bash
cd ../../../../
48 changes: 24 additions & 24 deletions comps/dataprep/redis/multimodal_langchain/README.md
@@ -2,9 +2,9 @@

This `dataprep` microservice accepts videos (mp4 files) and their transcripts (optional) from the user and ingests them into the Redis vector store.

-# 🚀1. Start Microservice with Python(Option 1)
+## 🚀1. Start Microservice with Python(Option 1)

-## 1.1 Install Requirements
+### 1.1 Install Requirements

```bash
# Install ffmpeg static build
@@ -17,11 +17,11 @@ cp $(pwd)/ffmpeg-git-amd64-static/ffmpeg /usr/local/bin/
pip install -r requirements.txt
```

-## 1.2 Start Redis Stack Server
+### 1.2 Start Redis Stack Server

Please refer to this [readme](../../../vectorstores/langchain/redis/README.md).
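
If the linked readme is not at hand, a minimal sketch of the usual Redis Stack container follows; the image tag and port mappings are common defaults rather than values taken from that readme:

```bash
# Sketch with assumed defaults: 6379 for Redis, 8001 for RedisInsight.
docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
```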

-## 1.3 Setup Environment Variables
+### 1.3 Setup Environment Variables

```bash
export your_ip=$(hostname -I | awk '{print $1}')
@@ -30,7 +30,7 @@ export INDEX_NAME=${your_redis_index_name}
export PYTHONPATH=${path_to_comps}
```

-## 1.4 Start LVM Microservice (Optional)
+### 1.4 Start LVM Microservice (Optional)

This is required only if you are going to consume the _generate_captions_ API of this microservice, as described in [Section 4.3](#43-consume-generate_captions-api).

@@ -42,21 +42,21 @@ export your_ip=$(hostname -I | awk '{print $1}')
export LVM_ENDPOINT="http://${your_ip}:9399/v1/lvm"
```

-## 1.5 Start Data Preparation Microservice for Redis with Python Script
+### 1.5 Start Data Preparation Microservice for Redis with Python Script

Start the document preparation microservice for Redis with the command below.

```bash
python prepare_videodoc_redis.py
```

-# 🚀2. Start Microservice with Docker (Option 2)
+## 🚀2. Start Microservice with Docker (Option 2)

-## 2.1 Start Redis Stack Server
+### 2.1 Start Redis Stack Server

Please refer to this [readme](../../../vectorstores/langchain/redis/README.md).

-## 2.2 Start LVM Microservice (Optional)
+### 2.2 Start LVM Microservice (Optional)

This is required only if you are going to consume the _generate_captions_ API of this microservice as described [here](#43-consume-generate_captions-api).

@@ -68,7 +68,7 @@ export your_ip=$(hostname -I | awk '{print $1}')
export LVM_ENDPOINT="http://${your_ip}:9399/v1/lvm"
```

-## 2.3 Setup Environment Variables
+### 2.3 Setup Environment Variables

```bash
export your_ip=$(hostname -I | awk '{print $1}')
Expand All @@ -79,39 +79,39 @@ export INDEX_NAME=${your_redis_index_name}
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
```

-## 2.4 Build Docker Image
+### 2.4 Build Docker Image

```bash
cd ../../../../
docker build -t opea/dataprep-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/multimodal_langchain/docker/Dockerfile .
```

-## 2.5 Run Docker with CLI (Option A)
+### 2.5 Run Docker with CLI (Option A)

```bash
docker run -d --name="dataprep-redis-server" -p 6007:6007 --runtime=runc --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e REDIS_URL=$REDIS_URL -e INDEX_NAME=$INDEX_NAME -e LVM_ENDPOINT=$LVM_ENDPOINT -e HUGGINGFACEHUB_API_TOKEN=$HUGGINGFACEHUB_API_TOKEN opea/dataprep-redis:latest
```

-## 2.6 Run with Docker Compose (Option B - deprecated, will move to genAIExample in future)
+### 2.6 Run with Docker Compose (Option B - deprecated, will move to genAIExample in future)

```bash
cd comps/dataprep/redis/multimodal_langchain/docker
docker compose -f docker-compose-dataprep-redis.yaml up -d
```

-# 🚀3. Status Microservice
+## 🚀3. Status Microservice

```bash
docker container logs -f dataprep-redis-server
```

-# 🚀4. Consume Microservice
+## 🚀4. Consume Microservice

Once this dataprep microservice is started, users can use the commands below to invoke it to convert videos and their (optional) transcripts to embeddings and save them to the Redis vector store.

This microservice provides 3 different ways for users to ingest videos into the Redis vector store, corresponding to the 3 use cases.

-## 4.1 Consume _videos_with_transcripts_ API
+### 4.1 Consume _videos_with_transcripts_ API

**Use case:** This API is used when a transcript file (in `.vtt` format) is available for each video.

@@ -120,7 +120,7 @@ This mircroservice has provided 3 different ways for users to ingest videos into
- Make sure the file paths after `files=@` are correct.
- Every transcript file's name must be identical to its corresponding video file's name (apart from the extensions `.vtt` and `.mp4`). For example, `video1.mp4` and `video1.vtt`. Otherwise, if `video1.vtt` is not included correctly in this API call, this microservice will return the error `No captions file video1.vtt found for video1.mp4`.

-### Single video-transcript pair upload
+#### Single video-transcript pair upload

```bash
curl -X POST \
@@ -130,7 +130,7 @@ curl -X POST \
http://localhost:6007/v1/videos_with_transcripts
```

-### Multiple video-transcript pair upload
+#### Multiple video-transcript pair upload

```bash
curl -X POST \
@@ -142,13 +142,13 @@ curl -X POST \
http://localhost:6007/v1/videos_with_transcripts
```

-## 4.2 Consume _generate_transcripts_ API
+### 4.2 Consume _generate_transcripts_ API

**Use case:** This API should be used when a video has meaningful audio or recognizable speech but its transcript file is not available.

In this use case, this microservice will use the [`whisper`](https://openai.com/index/whisper/) model to generate the `.vtt` transcript for the video.

-### Single video upload
+#### Single video upload

```bash
curl -X POST \
@@ -157,7 +157,7 @@ curl -X POST \
http://localhost:6007/v1/generate_transcripts
```

-### Multiple video upload
+#### Multiple video upload

```bash
curl -X POST \
@@ -167,7 +167,7 @@ curl -X POST \
http://localhost:6007/v1/generate_transcripts
```

-## 4.3 Consume _generate_captions_ API
+### 4.3 Consume _generate_captions_ API

**Use case:** This API should be used when a video has no audio or no meaningful audio.

@@ -192,7 +192,7 @@ curl -X POST \
http://localhost:6007/v1/generate_captions
```

-## 4.4 Consume get_videos API
+### 4.4 Consume get_videos API

To get the names of uploaded videos, use the following command.

@@ -202,7 +202,7 @@ curl -X POST \
http://localhost:6007/v1/dataprep/get_videos
```

-## 4.5 Consume delete_videos API
+### 4.5 Consume delete_videos API

To delete uploaded videos and clear the database, use the following command.
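
The command itself is collapsed in this view; by symmetry with the get_videos call above, it presumably looks like the following sketch (the path is an assumption):

```bash
# Hypothetical sketch mirroring the get_videos call; the path is assumed.
curl -X POST \
     http://localhost:6007/v1/dataprep/delete_videos
```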

12 changes: 7 additions & 5 deletions comps/embeddings/neural-speed/README.md
@@ -1,31 +1,33 @@
-# build Mosec endpoint docker image
+# Embedding Neural Speed
+
+## build Mosec endpoint docker image

```
docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy -t langchain-mosec:neuralspeed -f comps/embeddings/neural-speed/neuralspeed-docker/Dockerfile .
```

-# build embedding microservice docker image
+## build embedding microservice docker image

```
docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_proxy -t opea/embedding-langchain-mosec:neuralspeed -f comps/embeddings/neural-speed/docker/Dockerfile .
```

Note: Please contact us to request model files before building images.

-# launch Mosec endpoint docker container
+## launch Mosec endpoint docker container

```
docker run -d --name="embedding-langchain-mosec-endpoint" -p 6001:8000 langchain-mosec:neuralspeed
```

-# launch embedding microservice docker container
+## launch embedding microservice docker container

```
export MOSEC_EMBEDDING_ENDPOINT=http://{mosec_embedding_host_ip}:6001
docker run -d --name="embedding-langchain-mosec-server" -e http_proxy=$http_proxy -e https_proxy=$https_proxy -p 6000:6000 --ipc=host -e MOSEC_EMBEDDING_ENDPOINT=$MOSEC_EMBEDDING_ENDPOINT opea/embedding-langchain-mosec:neuralspeed
```

-# run client test
+## run client test

```
curl localhost:6000/v1/embeddings \
```
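
The request body is collapsed in this view; a sketch of a typical OpenAI-style embeddings call against this service, with the payload as an assumption:

```bash
# Hypothetical payload; the field name follows the OpenAI embeddings convention.
curl localhost:6000/v1/embeddings \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello, world!"}'
```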
12 changes: 6 additions & 6 deletions comps/feedback_management/mongo/README.md
@@ -34,15 +34,15 @@ docker build -t opea/feedbackmanagement-mongo-server:latest --build-arg https_pr

1. Run MongoDB image

-```bash
-docker run -d -p 27017:27017 --name=mongo mongo:latest
-```
+   ```bash
+   docker run -d -p 27017:27017 --name=mongo mongo:latest
+   ```

2. Run Feedback Management service

-```bash
-docker run -d --name="feedbackmanagement-mongo-server" -p 6016:6016 -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e MONGO_HOST=${MONGO_HOST} -e MONGO_PORT=${MONGO_PORT} -e DB_NAME=${DB_NAME} -e COLLECTION_NAME=${COLLECTION_NAME} opea/feedbackmanagement-mongo-server:latest
-```
+   ```bash
+   docker run -d --name="feedbackmanagement-mongo-server" -p 6016:6016 -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e no_proxy=$no_proxy -e MONGO_HOST=${MONGO_HOST} -e MONGO_PORT=${MONGO_PORT} -e DB_NAME=${DB_NAME} -e COLLECTION_NAME=${COLLECTION_NAME} opea/feedbackmanagement-mongo-server:latest
+   ```

### Invoke Microservice
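
The invocation examples are collapsed in this view; a heavily hedged sketch of what a feedback-creation call might look like, with the route and payload fields as assumptions rather than values from this README (the port comes from the docker run command above):

```bash
# Hypothetical sketch; the route and payload fields are assumptions.
curl -X POST http://localhost:6016/v1/feedback/create \
  -H "Content-Type: application/json" \
  -d '{"chat_id": "test_chat", "feedback_data": {"comment": "Helpful response", "rating": 5}}'
```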
