
Fix Xeon reference per its trademark (#803)
Signed-off-by: Malini Bhandaru <malini.bhandaru@intel.com>
mkbhanda authored Sep 13, 2024
1 parent 558ea3b commit e1b8ce0
Showing 8 changed files with 11 additions and 11 deletions.
2 changes: 1 addition & 1 deletion AudioQnA/README.md
@@ -4,7 +4,7 @@ AudioQnA is an example that demonstrates the integration of Generative AI (GenAI

## Deploy AudioQnA Service

-The AudioQnA service can be deployed on either Intel Gaudi2 or Intel XEON Scalable Processor.
+The AudioQnA service can be deployed on either Intel Gaudi2 or Intel Xeon Scalable Processor.

### Deploy AudioQnA on Gaudi

4 changes: 2 additions & 2 deletions ChatQnA/README.md
@@ -95,7 +95,7 @@ flowchart LR
```

-This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
+This ChatQnA use case performs RAG using LangChain, Redis VectorDB and Text Generation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.

In the below, we provide a table that describes for each microservice component in the ChatQnA architecture, the default configuration of the open source project, hardware, port, and endpoint.

@@ -114,7 +114,7 @@ In the below, we provide a table that describes for each microservice component

## Deploy ChatQnA Service

-The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The ChatQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.

Two types of ChatQnA pipeline are supported now: `ChatQnA with/without Rerank`. And the `ChatQnA without Rerank` pipeline (including Embedding, Retrieval, and LLM) is offered for Xeon customers who can not run rerank service on HPU yet require high performance and accuracy.

2 changes: 1 addition & 1 deletion DocSum/README.md
@@ -10,7 +10,7 @@ The architecture for document summarization will be illustrated/described below:

## Deploy Document Summarization Service

-The Document Summarization service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The Document Summarization service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.
Based on whether you want to use Docker or Kubernetes, follow the instructions below.

Currently we support two ways of deploying Document Summarization services with docker compose:
2 changes: 1 addition & 1 deletion FaqGen/README.md
@@ -6,7 +6,7 @@ Our FAQ Generation Application leverages the power of large language models (LLM

## Deploy FAQ Generation Service

-The FAQ Generation service can be deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The FAQ Generation service can be deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.

### Deploy FAQ Generation on Gaudi

2 changes: 1 addition & 1 deletion SearchQnA/README.md
@@ -22,7 +22,7 @@ The workflow falls into the following architecture:

## Deploy SearchQnA Service

-The SearchQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The SearchQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.

Currently we support two ways of deploying SearchQnA services with docker compose:

4 changes: 2 additions & 2 deletions Translation/README.md
@@ -6,11 +6,11 @@ Translation architecture shows below:

![architecture](./assets/img/translation_architecture.png)

-This Translation use case performs Language Translation Inference on Intel Gaudi2 or Intel XEON Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.
+This Translation use case performs Language Translation Inference on Intel Gaudi2 or Intel Xeon Scalable Processors. The Intel Gaudi2 accelerator supports both training and inference for deep learning models in particular for LLMs. Visit [Habana AI products](https://habana.ai/products) for more details.

## Deploy Translation Service

-The Translation service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The Translation service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.

### Deploy Translation on Gaudi

4 changes: 2 additions & 2 deletions VideoQnA/README.md
@@ -85,14 +85,14 @@ flowchart LR
DP <-.->|d|VDB
```

-- This project implements a Retrieval-Augmented Generation (RAG) workflow using LangChain, Intel VDMS VectorDB, and Text Generation Inference, optimized for Intel XEON Scalable Processors.
+- This project implements a Retrieval-Augmented Generation (RAG) workflow using LangChain, Intel VDMS VectorDB, and Text Generation Inference, optimized for Intel Xeon Scalable Processors.
- Video Processing: Videos are converted into feature vectors using mean aggregation and stored in the VDMS vector store.
- Query Handling: When a user submits a query, the system performs a similarity search in the vector store to retrieve the best-matching videos.
- Contextual Inference: The retrieved videos are then sent to the Large Vision Model (LVM) for inference, providing supplemental context for the query.

## Deploy VideoQnA Service

-The VideoQnA service can be effortlessly deployed on Intel XEON Scalable Processors.
+The VideoQnA service can be effortlessly deployed on Intel Xeon Scalable Processors.

### Required Models

2 changes: 1 addition & 1 deletion VisualQnA/README.md
@@ -30,7 +30,7 @@ You can choose other llava-next models, such as `llava-hf/llava-v1.6-vicuna-13b-

## Deploy VisualQnA Service

-The VisualQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel XEON Scalable Processors.
+The VisualQnA service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processors.

Currently we support deploying VisualQnA services with docker compose.

