diff --git a/README.md b/README.md
index d1112b3137..b846204aa2 100644
--- a/README.md
+++ b/README.md
@@ -26,20 +26,20 @@ potential of cutting-edge AI models.
 
 ## 🔥 Hot Topics
 ### Framework Enhancements
+- Support speech recognition model: [#929](https://github.com/xorbitsai/inference/pull/929)
 - Metrics support: [#906](https://github.com/xorbitsai/inference/pull/906)
 - Docker image: [#855](https://github.com/xorbitsai/inference/pull/855)
 - Support multimodal: [#829](https://github.com/xorbitsai/inference/pull/829)
 - Auto recover: [#694](https://github.com/xorbitsai/inference/pull/694)
 - Function calling API: [#701](https://github.com/xorbitsai/inference/pull/701), here's example: https://github.com/xorbitsai/inference/blob/main/examples/FunctionCall.ipynb
 - Support rerank model: [#672](https://github.com/xorbitsai/inference/pull/672)
-- Speculative decoding: [#509](https://github.com/xorbitsai/inference/pull/509)
 ### New Models
+- Built-in support for [Whisper](https://github.com/openai/whisper): [#929](https://github.com/xorbitsai/inference/pull/929)
+- Built-in support for [Orion-chat](https://huggingface.co/OrionStarAI): [#933](https://github.com/xorbitsai/inference/pull/933)
 - Built-in support for [InternLM2-chat](https://huggingface.co/internlm/internlm2-chat-7b): [#829](https://github.com/xorbitsai/inference/pull/913)
 - Built-in support for [qwen-vl](https://huggingface.co/Qwen/Qwen-VL-Chat): [#829](https://github.com/xorbitsai/inference/pull/829)
 - Built-in support for [phi-2](https://huggingface.co/microsoft/phi-2): [#828](https://github.com/xorbitsai/inference/pull/828)
 - Built-in support for [mistral-instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2): [#796](https://github.com/xorbitsai/inference/pull/796)
-- Built-in support for [deepseek-llm](https://huggingface.co/deepseek-ai) and [deepseek-coder](https://huggingface.co/deepseek-ai): [#786](https://github.com/xorbitsai/inference/pull/786)
-- Built-in support for [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1): [#782](https://github.com/xorbitsai/inference/pull/782)
 ### Integrations
 - [Dify](https://docs.dify.ai/advanced/model-configuration/xinference): an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
 - [Chatbox](https://chatboxai.app/): a desktop client for multiple cutting-edge LLM models, available on Windows, Mac and Linux.
@@ -68,16 +68,17 @@ allowing the seamless distribution of model inference across multiple devices or
 with popular third-party libraries including [LangChain](https://python.langchain.com/docs/integrations/providers/xinference), [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/XinferenceLocalDeployment.html#i-run-pip-install-xinference-all-in-a-terminal-window), [Dify](https://docs.dify.ai/advanced/model-configuration/xinference), and [Chatbox](https://chatboxai.app/).
 
 ## Why Xinference
-| Feature | Xinference | FastChat | OpenLLM | RayLLM |
-|---------|------------|----------|---------|--------|
-| OpenAI-Compatible RESTful API | ✅ | ✅ | ✅ | ✅ |
-| vLLM Integrations | ✅ | ✅ | ✅ | ✅ |
-| More Inference Engines (GGML, TensorRT) | ✅ | ❌ | ✅ | ✅ |
-| More Platforms (CPU, Metal) | ✅ | ✅ | ❌ | ❌ |
-| Multi-node Cluster Deployment | ✅ | ❌ | ❌ | ✅ |
-| Image Models (Text-to-Image) | ✅ | ✅ | ❌ | ❌ |
-| Text Embedding Models | ✅ | ❌ | ❌ | ❌ |
-| Multimodal Models | ✅ | ❌ | ❌ | ❌ |
+| Feature                                         | Xinference | FastChat | OpenLLM | RayLLM |
+|------------------------------------------------|------------|----------|---------|--------|
+| OpenAI-Compatible RESTful API                   | ✅ | ✅ | ✅ | ✅ |
+| vLLM Integrations                               | ✅ | ✅ | ✅ | ✅ |
+| More Inference Engines (GGML, TensorRT)         | ✅ | ❌ | ✅ | ✅ |
+| More Platforms (CPU, Metal)                     | ✅ | ✅ | ❌ | ❌ |
+| Multi-node Cluster Deployment                   | ✅ | ❌ | ❌ | ✅ |
+| Image Models (Text-to-Image)                    | ✅ | ✅ | ❌ | ❌ |
+| Text Embedding Models                           | ✅ | ❌ | ❌ | ❌ |
+| Multimodal Models                               | ✅ | ❌ | ❌ | ❌ |
+| Audio Models                                    | ✅ | ❌ | ❌ | ❌ |
 | More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |
 
 ## Getting Started
diff --git a/README_zh_CN.md b/README_zh_CN.md
index a611da90f4..af2b42ee35 100644
--- a/README_zh_CN.md
+++ b/README_zh_CN.md
@@ -23,20 +23,20 @@ Xorbits Inference(Xinference)是一个性能强大且功能全面的分布
 
 ## 🔥 近期热点
 ### 框架增强
+- 支持语音识别模型: [#929](https://github.com/xorbitsai/inference/pull/929)
 - 增加 Metrics 统计信息: [#906](https://github.com/xorbitsai/inference/pull/906)
 - Docker 镜像支持: [#855](https://github.com/xorbitsai/inference/pull/855)
 - 支持多模态模型:[#829](https://github.com/xorbitsai/inference/pull/829)
 - 模型自动恢复: [#694](https://github.com/xorbitsai/inference/pull/694)
 - 函数调用接口: [#701](https://github.com/xorbitsai/inference/pull/701),示例代码:https://github.com/xorbitsai/inference/blob/main/examples/FunctionCall.ipynb
 - 支持 rerank 模型: [#672](https://github.com/xorbitsai/inference/pull/672)
-- 支持指定 grammar 输出: [#525](https://github.com/xorbitsai/inference/pull/525)
 ### 新模型
+- 内置 [Whisper](https://github.com/openai/whisper): [#929](https://github.com/xorbitsai/inference/pull/929)
+- 内置 [Orion-chat](https://huggingface.co/OrionStarAI): [#933](https://github.com/xorbitsai/inference/pull/933)
 - 内置 [InternLM2-chat](https://huggingface.co/internlm/internlm2-chat-7b): [#829](https://github.com/xorbitsai/inference/pull/913)
 - 内置 [qwen-vl](https://huggingface.co/Qwen/Qwen-VL-Chat): [#829](https://github.com/xorbitsai/inference/pull/829)
 - 内置 [phi-2](https://huggingface.co/microsoft/phi-2): [#828](https://github.com/xorbitsai/inference/pull/828)
 - 内置 [mistral-instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2): [#796](https://github.com/xorbitsai/inference/pull/796)
-- 内置 [deepseek-llm](https://huggingface.co/deepseek-ai) 与 [deepseek-coder](https://huggingface.co/deepseek-ai): [#786](https://github.com/xorbitsai/inference/pull/786)
-- 内置 [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1): [#782](https://github.com/xorbitsai/inference/pull/782)
 ### 集成
 - [Dify](https://docs.dify.ai/advanced/model-configuration/xinference): 一个涵盖了大型语言模型开发、部署、维护和优化的 LLMOps 平台。
 - [Chatbox](https://chatboxai.app/): 一个支持前沿大语言模型的桌面客户端,支持 Windows,Mac,以及 Linux。
@@ -55,17 +55,18 @@ Xorbits Inference(Xinference)是一个性能强大且功能全面的分布
 🔌 **开放生态,无缝对接**: 与流行的三方库无缝对接,包括
 [LangChain](https://python.langchain.com/docs/integrations/providers/xinference),[LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/XinferenceLocalDeployment.html#i-run-pip-install-xinference-all-in-a-terminal-window),[Dify](https://docs.dify.ai/advanced/model-configuration/xinference),以及 [Chatbox](https://chatboxai.app/)。
 
 ## 为什么选择 Xinference
-| 功能特点 | Xinference | FastChat | OpenLLM | RayLLM |
-|---------|------------|----------|---------|--------|
+| 功能特点                | Xinference | FastChat | OpenLLM | RayLLM |
+|-------------------------|------------|----------|---------|--------|
 | 兼容 OpenAI 的 RESTful API | ✅ | ✅ | ✅ | ✅ |
-| vLLM 集成 | ✅ | ✅ | ✅ | ✅ |
-| 更多推理引擎(GGML、TensorRT) | ✅ | ❌ | ✅ | ✅ |
-| 更多平台支持(CPU、Metal) | ✅ | ✅ | ❌ | ❌ |
-| 分布式集群部署 | ✅ | ❌ | ❌ | ✅ |
-| 图像模型(文生图) | ✅ | ✅ | ❌ | ❌ |
-| 文本嵌入模型 | ✅ | ❌ | ❌ | ❌ |
-| 多模态模型 | ✅ | ❌ | ❌ | ❌ |
-| 更多 OpenAI 功能 (函数调用) | ✅ | ❌ | ❌ | ❌ |
+| vLLM 集成               | ✅ | ✅ | ✅ | ✅ |
+| 更多推理引擎(GGML、TensorRT) | ✅ | ❌ | ✅ | ✅ |
+| 更多平台支持(CPU、Metal) | ✅ | ✅ | ❌ | ❌ |
+| 分布式集群部署          | ✅ | ❌ | ❌ | ✅ |
+| 图像模型(文生图)      | ✅ | ✅ | ❌ | ❌ |
+| 文本嵌入模型            | ✅ | ❌ | ❌ | ❌ |
+| 多模态模型              | ✅ | ❌ | ❌ | ❌ |
+| 语音识别模型            | ✅ | ❌ | ❌ | ❌ |
+| 更多 OpenAI 功能 (函数调用) | ✅ | ❌ | ❌ | ❌ |
 
 ## 入门指南
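The comparison table above advertises an OpenAI-compatible RESTful API, and this change adds audio/speech-recognition support ([#929](https://github.com/xorbitsai/inference/pull/929)). Below is a minimal sketch of how a client might exercise both through the standard `openai` Python package against a running Xinference server. The base URL (Xinference's usual default port 9997), the model UIDs, and the audio file path are illustrative assumptions, not values defined by this diff.

```python
# Minimal sketch: calling a local Xinference deployment through its
# OpenAI-compatible RESTful API (pip install "openai>=1.0").
# Assumptions: the server listens on http://localhost:9997, and a chat model
# plus a Whisper-based audio model have already been launched;
# "my-chat-model", "my-whisper-model", and "speech.wav" are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

# Chat completion against a launched LLM.
chat = client.chat.completions.create(
    model="my-chat-model",
    messages=[{"role": "user", "content": "Give a one-line summary of Xinference."}],
)
print(chat.choices[0].message.content)

# Speech recognition through the OpenAI-style transcription route,
# assuming the audio support added in #929 exposes such an endpoint.
with open("speech.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="my-whisper-model", file=audio)
print(transcript.text)
```

Because the surface mirrors OpenAI's, the integrations listed above (LangChain, LlamaIndex, Dify, Chatbox) should in principle be able to point at the same base URL without client-side changes.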