doc: use markdown table in supported_examples (#707)
* doc: use markdown table in supported_examples

  Replace raw HTML with markdown table syntax to fix the poor github.io rendering of raw HTML tables.

  Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

  For more information, see https://pre-commit.ci

---------

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent a8244c4 · commit 9cf1d88

Showing 1 changed file with 27 additions and 180 deletions.
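For a quick sense of the change, here is a condensed before/after sketch using the VisualQnA table from the diff below (the HTML rows are collapsed onto single lines here for brevity; the full removed and added lines appear in the diff itself). The commit message's point is that github.io renders the raw HTML form poorly, while the markdown table syntax renders cleanly:

```markdown
<!-- Before: raw HTML table (renders poorly on github.io) -->
<table>
  <tbody>
    <tr><td>LLM</td><td>HW</td><td>Description</td></tr>
    <tr><td><a href="https://huggingface.co/llava-hf/llava-1.5-7b-hf">LLaVA-1.5-7B</a></td><td>Gaudi2</td><td>Chatbot</td></tr>
  </tbody>
</table>

<!-- After: markdown table syntax -->
| LLM                                                             | HW     | Description |
| --------------------------------------------------------------- | ------ | ----------- |
| [LLaVA-1.5-7B](https://huggingface.co/llava-hf/llava-1.5-7b-hf) | Gaudi2 | Chatbot     |
```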
@@ -1,220 +1,67 @@
<div align="center">

# Supported Examples

<div align="left">

This document introduces the supported examples of GenAIExamples. The supported Vector Database, LLM models, serving frameworks and hardwares are listed as below.

## ChatQnA

[ChatQnA](./ChatQnA/README.md) is an example of chatbot for question and answering through retrieval augmented generation (RAG).

-<table>
-<tbody>
-<tr>
-<td>Framework</td>
-<td>LLM</td>
-<td>Embedding</td>
-<td>Vector Database</td>
-<td>Serving</td>
-<td>HW</td>
-<td>Description</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">NeuralChat-7B</a></td>
-<td><a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
-<td><a href="https://redis.io/">Redis</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a> <a href="https://github.com/huggingface/text-embeddings-inference">TEI</a></td>
-<td>Xeon/Gaudi2/GPU</td>
-<td>Chatbot</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">NeuralChat-7B</a></td>
-<td><a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
-<td><a href="https://www.trychroma.com/">Chroma</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a> <a href="https://github.com/huggingface/text-embeddings-inference">TEI</td>
-<td>Xeon/Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">Mistral-7B</a></td>
-<td><a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
-<td><a href="https://redis.io/">Redis</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a> <a href="https://github.com/huggingface/text-embeddings-inference">TEI</td>
-<td>Xeon/Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">Mistral-7B</a></td>
-<td><a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
-<td><a href="https://qdrant.tech/">Qdrant</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a> <a href="https://github.com/huggingface/text-embeddings-inference">TEI</td>
-<td>Xeon/Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/Qwen/Qwen2-7B">Qwen2-7B</a></td>
-<td><a href="https://huggingface.co/BAAI/bge-base-en">BGE-Base</a></td>
-<td><a href="https://redis.io/">Redis</a></td>
-<td><a href=<a href="https://github.com/huggingface/text-embeddings-inference">TEI</td>
-<td>Xeon/Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-</tbody>
-</table>
+| Framework | LLM | Embedding | Vector Database | Serving | HW | Description |
+| --------- | --- | --------- | --------------- | ------- | -- | ----------- |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [NeuralChat-7B](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Redis](https://redis.io/) | [TGI](https://github.com/huggingface/text-generation-inference) [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2/GPU | Chatbot |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [NeuralChat-7B](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Chroma](https://www.trychroma.com/) | [TGI](https://github.com/huggingface/text-generation-inference) [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Chatbot |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Redis](https://redis.io/) | [TGI](https://github.com/huggingface/text-generation-inference) [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Chatbot |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Qdrant](https://qdrant.tech/) | [TGI](https://github.com/huggingface/text-generation-inference) [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Chatbot |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) | [BGE-Base](https://huggingface.co/BAAI/bge-base-en) | [Redis](https://redis.io/) | [TEI](https://github.com/huggingface/text-embeddings-inference) | Xeon/Gaudi2 | Chatbot |

### CodeGen

[CodeGen](./CodeGen/README.md) is an example of copilot designed for code generation in Visual Studio Code.

-<table>
-<tbody>
-<tr>
-<td>Framework</td>
-<td>LLM</td>
-<td>Serving</td>
-<td>HW</td>
-<td>Description</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/meta-llama/CodeLlama-7b-hf">meta-llama/CodeLlama-7b-hf</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
-<td>Xeon/Gaudi2</td>
-<td>Copilot</td>
-</tr>
-</tbody>
-</table>
+| Framework | LLM | Serving | HW | Description |
+| --------- | --- | ------- | -- | ----------- |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Copilot |

### CodeTrans

[CodeTrans](./CodeTrans/README.md) is an example of chatbot for converting code written in one programming language to another programming language while maintaining the same functionality.

-<table>
-<tbody>
-<tr>
-<td>Framework</td>
-<td>LLM</td>
-<td>Serving</td>
-<td>HW</td>
-<td>Description</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/HuggingFaceH4/mistral-7b-grok">HuggingFaceH4/mistral-7b-grok</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
-<td>Xeon/Gaudi2</td>
-<td>Code Translation</td>
-</tr>
-</tbody>
-</table>
+| Framework | LLM | Serving | HW | Description |
+| --------- | --- | ------- | -- | ----------- |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [HuggingFaceH4/mistral-7b-grok](https://huggingface.co/HuggingFaceH4/mistral-7b-grok) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Code Translation |

### DocSum

[DocSum](./DocSum/README.md) is an example of chatbot for summarizing the content of documents or reports.

-<table>
-<tbody>
-<tr>
-<td>Framework</td>
-<td>LLM</td>
-<td>Serving</td>
-<td>HW</td>
-<td>Description</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">NeuralChat-7B</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
-<td>Xeon/Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">Mistral-7B</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
-<td>Xeon/Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-</tbody>
-</table>
+| Framework | LLM | Serving | HW | Description |
+| --------- | --- | ------- | -- | ----------- |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [NeuralChat-7B](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Chatbot |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Chatbot |

### Language Translation

[Language Translation](./Translation/README.md) is an example of chatbot for converting a source-language text to an equivalent target-language text.

-<table>
-<tbody>
-<tr>
-<td>Framework</td>
-<td>LLM</td>
-<td>Serving</td>
-<td>HW</td>
-<td>Description</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/haoranxu/ALMA-13B">haoranxu/ALMA-13B</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
-<td>Xeon/Gaudi2</td>
-<td>Language Translation</td>
-</tr>
-</tbody>
-</table>
+| Framework | LLM | Serving | HW | Description |
+| --------- | --- | ------- | -- | ----------- |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Language Translation |

### SearchQnA

[SearchQnA](./SearchQnA/README.md) is an example of chatbot for using search engine to enhance QA quality.

-<table>
-<tbody>
-<tr>
-<td>Framework</td>
-<td>LLM</td>
-<td>Serving</td>
-<td>HW</td>
-<td>Description</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/Intel/neural-chat-7b-v3-3">NeuralChat-7B</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
-<td>Xeon/Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-<tr>
-<td><a href="https://www.langchain.com">LangChain</a>/<a href="https://www.llamaindex.ai">LlamaIndex</a></td>
-<td><a href="https://huggingface.co/mistralai/Mistral-7B-v0.1">Mistral-7B</a></td>
-<td><a href="https://github.com/huggingface/text-generation-inference">TGI</a></td>
-<td>Xeon/Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-</tbody>
-</table>
+| Framework | LLM | Serving | HW | Description |
+| --------- | --- | ------- | -- | ----------- |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [NeuralChat-7B](https://huggingface.co/Intel/neural-chat-7b-v3-3) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Chatbot |
+| [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Chatbot |

### VisualQnA

[VisualQnA](./VisualQnA/README.md) is an example of chatbot for question and answering based on the images.

-<table>
-<tbody>
-<tr>
-<td>LLM</td>
-<td>HW</td>
-<td>Description</td>
-</tr>
-<tr>
-<td><a href="https://huggingface.co/llava-hf/llava-1.5-7b-hf">LLaVA-1.5-7B</a></td>
-<td>Gaudi2</td>
-<td>Chatbot</td>
-</tr>
-</tbody>
-</table>
+| LLM | HW | Description |
+| --- | -- | ----------- |
+| [LLaVA-1.5-7B](https://huggingface.co/llava-hf/llava-1.5-7b-hf) | Gaudi2 | Chatbot |

> **_NOTE:_** The `Language Translation`, `SearchQnA`, `VisualQnA` and other use cases not listing here are in active development. The code structure of these use cases are subject to change.