Releases: xorbitsai/inference
v0.9.3
What's new in 0.9.3 (2024-03-15)
These are the changes in inference v0.9.3.
New features
- FEAT: Add Yi-9B by @mujin2 in #1117
- FEAT: Add image generation support by @hainaweiben in #1047
Enhancements
- ENH: update cmd help info by @luweizheng in #1106
- ENH: Remove quantization limits for Apple METAL device when running model via `llama-cpp-python` by @ChengjieLi28 in #1134
- ENH: Make GET /v1/models compatible with OpenAI API. by @notsyncing in #1127
- ENH: support vllm>=0.3.1 by @qinxuye in #1145
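The `GET /v1/models` change (#1127) aligns the endpoint's response with OpenAI's list format. A minimal client-side sketch of parsing such a response, using a hard-coded payload instead of a live server (the model ids below are made up; a real client would fetch the JSON from the running endpoint):

```python
# Sketch: parse an OpenAI-style model list response.
# The payload is hard-coded for illustration only.
import json

payload = json.dumps({
    "object": "list",
    "data": [
        {"id": "llama-2-chat", "object": "model"},
        {"id": "yi-9b", "object": "model"},
    ],
})

def model_ids(raw: str) -> list:
    """Return the id of every model in an OpenAI-style list response."""
    body = json.loads(raw)
    assert body.get("object") == "list"
    return [m["id"] for m in body["data"]]

ids = model_ids(payload)
print(ids)  # ['llama-2-chat', 'yi-9b']
```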
Bug fixes
- BUG: Fix a useless f-string by @mikeshi80 in #1130
- BUG: Fix model list loading failure caused by a large number of invalid requests on the model list page by @wertycn in #1111
- BUG: Fix cache status for embedding, rerank and image models on the web UI by @ChengjieLi28 in #1135
- BUG: Fix missing information for `xinference registrations` and `xinference list` commands by @ChengjieLi28 in #1140
- BUG: Fix cannot continue to chat after canceling the streaming chat via `ctrl+c` by @ChengjieLi28 in #1144
Tests
- TST: Remove testing LLM model creating embedding by @ChengjieLi28 in #1121
New Contributors
- @luweizheng made their first contribution in #1106
- @mujin2 made their first contribution in #1117
- @wertycn made their first contribution in #1111
Full Changelog: v0.9.2...v0.9.3
v0.9.2
What's new in 0.9.2 (2024-03-08)
These are the changes in inference v0.9.2.
New features
- FEAT: Add a command / SDK interface to query which models are able to… by @hainaweiben in #1076
- FEAT: add a docker-compose-distributed example with multiple workers by @bufferoverflow in #1064
- FEAT: Support download and merge multiple parts of gguf files by @notsyncing in #1075
- FEAT: Supports LoRA for LLM and image models by @ChengjieLi28 in #1080
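For #1075, downloading and merging multi-part GGUF files: assuming the parts are plain byte ranges to be concatenated in order (the part filenames and helper below are hypothetical, not xinference's actual API or naming scheme), the merge step can be sketched as:

```python
# Sketch: merge ordered GGUF part files by byte concatenation.
# Part names here are invented; real split naming differs.
import tempfile
from pathlib import Path

def merge_parts(parts, dest, chunk=1 << 20):
    """Concatenate the part files into dest, in the order given."""
    with open(dest, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                while block := src.read(chunk):
                    out.write(block)

# Demo with throwaway files standing in for downloaded parts.
workdir = Path(tempfile.mkdtemp())
(workdir / "model.gguf.part-a").write_bytes(b"GGUF-head")
(workdir / "model.gguf.part-b").write_bytes(b"-tail")
merge_parts(
    [workdir / "model.gguf.part-a", workdir / "model.gguf.part-b"],
    workdir / "model.gguf",
)
merged = (workdir / "model.gguf").read_bytes()
print(merged)  # b'GGUF-head-tail'
```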
Enhancements
- ENH: Supports `n_gpu_layers` parameter for `llama-cpp-python` by @ChengjieLi28 in #1070
- ENH: Add a dropdown to the web UI to support adjusting GPU offload layers for llama.cpp loader by @notsyncing in #1073
- ENH: [UI] Show `replica` on running model page by @ChengjieLi28 in #1093
- ENH: Add "[DONE]" to the end of stream generation for better openai SDK compatibility by @ZhangTianrong in #1062
- ENH: [UI] Support setting `CPU` when selecting n_gpu by @ChengjieLi28 in #1096
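The "[DONE]" sentinel added in #1062 is what OpenAI-compatible clients look for to know a server-sent-event stream has ended. A client-side sketch of consuming such a stream (the event strings below are fabricated for illustration, not real server output):

```python
# Sketch: consume an OpenAI-style SSE stream, stopping at [DONE].
import json

def iter_chunks(sse_lines):
    """Yield decoded JSON chunks from 'data: ...' lines until [DONE]."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data.strip() == "[DONE]":
            break  # the sentinel marks the end of the stream
        yield json.loads(data)

stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(c["choices"][0]["delta"]["content"] for c in iter_chunks(stream))
print(text)  # Hello
```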
Documentation
- DOC: Extra parameters for launching models by @aresnow1 in #1077
- DOC: contribution doc by @Ago327 in #1092
- DOC: doc for lora by @ChengjieLi28 in #1103
Others
- Update llm_family.json to correct the context length of glaive coder by @mikeshi80 in #1083
New Contributors
- @mikeshi80 made their first contribution in #1083
- @bufferoverflow made their first contribution in #1064
- @Ago327 made their first contribution in #1092
Full Changelog: v0.9.1...v0.9.2
v0.9.1
What's new in 0.9.1 (2024-03-01)
These are the changes in inference v0.9.1.
New features
- FEAT: Docker for cpu only by @ChengjieLi28 in #1068
Enhancements
- ENH: Support downloading gemma from modelscope by @aresnow1 in #1035
- ENH: [UI] Setting `quantization` when registering LLM by @ChengjieLi28 in #1040
- ENH: Restful client supports multiple system prompts for chat by @ChengjieLi28 in #1056
- ENH: supports disabling worker reporting status by @ChengjieLi28 in #1057
- ENH: Extra params for `xinference launch` command line by @ChengjieLi28 in #1048
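One common way a CLI forwards extra, unanticipated parameters to the model, as #1048 enables for `xinference launch`, is to collect unrecognized flags and pass them through as keyword arguments. A sketch of that pattern (the flag names are illustrative, not the actual `xinference` CLI surface):

```python
# Sketch: split known CLI flags from passthrough model parameters.
import argparse

parser = argparse.ArgumentParser(prog="launch-sketch")
parser.add_argument("--model-name", required=True)

# parse_known_args separates recognized flags from the leftovers,
# which can then be forwarded as model kwargs.
args, extra = parser.parse_known_args(
    ["--model-name", "llama-2-chat", "--n_gpu_layers", "35"]
)

def pairs_to_kwargs(tokens):
    """Turn ['--key', 'value', ...] leftovers into a kwargs dict."""
    it = iter(tokens)
    return {k.lstrip("-"): v for k, v in zip(it, it)}

kwargs = pairs_to_kwargs(extra)
print(args.model_name, kwargs)  # llama-2-chat {'n_gpu_layers': '35'}
```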
Bug fixes
- BUG: Fix some models that cannot download from `modelscope` by @ChengjieLi28 in #1066
- BUG: Fix early truncation due to `max_token` defaulting to `16` instead of `1024` by @ZhangTianrong in #1061
Documentation
- DOC: Update readme by @qinxuye in #1045
- DOC: Fix readme by @qinxuye in #1054
- DOC: Fix wechat links by @qinxuye in #1055
New Contributors
- @ZhangTianrong made their first contribution in #1061
Full Changelog: v0.9.0...v0.9.1
v0.9.0
What's new in 0.9.0 (2024-02-22)
These are the changes in inference v0.9.0.
New features
- FEAT: Refactor device related code and add initial Intel GPU support by @notsyncing in #968
- FEAT: Support gemma series model by @aresnow1 in #1024
Enhancements
- ENH: [UI] Supports `replica` when launching LLM models by @ChengjieLi28 in #1011
- ENH: [UI] Show cluster resource information by @ChengjieLi28 in #1015
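The `replica` option (#1011) launches several copies of a model; a natural placement strategy is to spread the replicas round-robin over the available workers. A toy sketch of that idea (worker names invented; this is not xinference's actual scheduler):

```python
# Sketch: round-robin replica placement over workers.
from itertools import cycle

def assign_replicas(workers, replica):
    """Assign each of the `replica` copies to a worker, round-robin."""
    pool = cycle(workers)
    return [next(pool) for _ in range(replica)]

placement = assign_replicas(["worker-0", "worker-1"], replica=3)
print(placement)  # ['worker-0', 'worker-1', 'worker-0']
```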
Bug fixes
- BUG: fix chat completion error when indexing body.messages by @fffonion in #1008
- BUG: Fix cache sd 1.5 error by @codingl2k1 in #1013
- BUG: fix typo in modelscope llama-2-13b-chat-GGUF by @qinxuye in #1026
- BUG: Fix missing qwen 1.5 7b gguf by @codingl2k1 in #1027
Documentation
- DOC: Polish model operation command doc by @onesuper in #1000
- DOC: Fix note on secret_key generation and algorithm selection for OAuth2 by @ChengjieLi28 in #1012
New Contributors
- @fffonion made their first contribution in #1008
- @notsyncing made their first contribution in #968
Full Changelog: v0.8.5...v0.9.0
v0.8.5
What's new in 0.8.5 (2024-02-06)
These are the changes in inference v0.8.5.
New features
- FEAT: Implemented web UI for launching the text2image model. by @hainaweiben in #985
- FEAT: Support qwen-1.5 series by @aresnow1 in #994
Enhancements
- ENH: Download stable diffusion model from modelscope by @codingl2k1 in #980
- REF: Supports `pydantic` v2 by @ChengjieLi28 in #983
Bug fixes
- BUG: Fix load yi vl model to multiple cards by @codingl2k1 in #992
- BUG: client compatible with old version of xinference by @ChengjieLi28 in #987
New Contributors
- @hainaweiben made their first contribution in #985
Full Changelog: v0.8.4...v0.8.5
v0.8.4
What's new in 0.8.4 (2024-02-04)
These are the changes in inference v0.8.4.
Enhancements
- ENH: [UI] Fix too long LLM model name by @ChengjieLi28 in #979
- ENH: Add gguf models of llama-2-chat by @aresnow1 in #981
Bug fixes
- BUG: Fix custom model tool calls by @codingl2k1 in #978
- BUG: Fix chat template by @aresnow1 in #977
Documentation
- DOC: Translate model docs by @onesuper in #965
- DOC: Auto gen metrics doc by @codingl2k1 in #967
- DOC: Update README.md by @codingl2k1 in #969
Full Changelog: v0.8.3.1...v0.8.4
v0.8.3.1
What's new in 0.8.3.1 (2024-02-02)
These are the changes in inference v0.8.3.1.
Bug fixes
- BUG: Remove flash-attn dependency by @codingl2k1 in #970
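Dropping a hard flash-attn dependency (#970) is typically done with a guarded import and a fallback path, so the package installs and runs on machines without the accelerated kernel. A common version of that pattern, sketched here (not the actual xinference code):

```python
# Sketch: make an accelerated dependency optional with a guarded import.
try:
    import flash_attn  # noqa: F401
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False

def attention_impl() -> str:
    """Pick an attention implementation based on what is installed."""
    return "flash_attention_2" if HAS_FLASH_ATTN else "eager"

print(attention_impl())
```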
Full Changelog: v0.8.3...v0.8.3.1
v0.8.3
What's new in 0.8.3 (2024-02-02)
These are the changes in inference v0.8.3.
New features
- FEAT: add whisper.small and belle distilwhisper model, fix parameter in rerank by @zhanghx0905 in #944
- FEAT: Support jina-embeddings-v2-base-zh by @aresnow1 in #948
- FEAT: Support Yi VL by @codingl2k1 in #946
- FEAT: Support more embedding and rerank models by @aresnow1 in #959
Enhancements
- ENH: Record gpu mem status in workers by @ChengjieLi28 in #941
- ENH: Allow chat `max_tokens` to be None by @codingl2k1 in #960
- ENH: `chatglm` `ggml` format supports `system_prompt` by @ChengjieLi28 in #962
Bug fixes
- BUG: Fix roles in chat UI by @aresnow1 in #949
- BUG: Fix heartbeat by @codingl2k1 in #957
- BUG: Fix model's content length by @aresnow1 in #955
Documentation
- DOC: Update readme by @aresnow1 in #938
- DOC: Add image model doc by @codingl2k1 in #947
- DOC: Add audio model doc by @codingl2k1 in #954
- DOC: Reorganize model-related docs by @onesuper in #961
New Contributors
- @zhanghx0905 made their first contribution in #944
Full Changelog: v0.8.2...v0.8.3
v0.8.2
What's new in 0.8.2 (2024-01-26)
These are the changes in inference v0.8.2.
New features
- FEAT: Support events by @codingl2k1 in #916
- FEAT: Support audio model by @codingl2k1 in #929
- FEAT: Support orion series models by @aresnow1 in #933
- FEAT: Support Mixtral-8x7B-Instruct-v0.1-AWQ by @aresnow1 in #936
Enhancements
- ENH: Launch model by `version` by @ChengjieLi28 in #896
- ENH: Move multimodal to LLM by @codingl2k1 in #917
- ENH: InternLM2 chat template by @aresnow1 in #919
- ENH: Support `use_fp16` for rerank model by @aresnow1 in #927
- ENH: Record instance count and version count when detailed listing model registrations by @ChengjieLi28 in #920
- BLD: Resolve conflicts during installation by @aresnow1 in #924
- REF: Move auth code to service for better scalability by @ChengjieLi28 in #925
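The detailed registration listing from #920 reports how many versions are registered and how many instances are running per model. A toy sketch of computing those counts (the record shape and running-instance list are fabricated; real registration entries carry more fields):

```python
# Sketch: per-model version and instance counts for a detailed listing.
from collections import Counter

registrations = [
    {"model_name": "llama-2-chat", "model_version": "7b-q4"},
    {"model_name": "llama-2-chat", "model_version": "13b-q4"},
    {"model_name": "orion-chat", "model_version": "14b"},
]
running = ["llama-2-chat", "llama-2-chat"]  # fabricated running instances

def detailed_listing(regs, running_models):
    """Aggregate version_count and instance_count per model name."""
    instance_count = Counter(running_models)
    version_count = Counter(r["model_name"] for r in regs)
    return {
        name: {
            "version_count": version_count[name],
            "instance_count": instance_count[name],
        }
        for name in version_count
    }

listing = detailed_listing(registrations, running)
print(listing["llama-2-chat"])  # {'version_count': 2, 'instance_count': 2}
```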
Documentation
- DOC: Update readme by @aresnow1 in #914
- DOC: Display contributors in readme by @onesuper in #915
- DOC: Merge multimodal to LLM by @codingl2k1 in #923
- DOC: Model usage guide by @onesuper in #926
- DOC: Audio doc by @codingl2k1 in #937
Full Changelog: v0.8.1...v0.8.2
v0.8.1
What's new in 0.8.1 (2024-01-19)
These are the changes in inference v0.8.1.
New features
- FEAT: Auto recover limit by @codingl2k1 in #893
- FEAT: Prometheus metrics exporter by @codingl2k1 in #906
- FEAT: Add internlm2-chat support by @aresnow1 in #913
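The Prometheus exporter (#906) publishes metrics in the Prometheus text exposition format. A hand-rolled sketch of that format for a single counter (the metric name below is made up; the real exporter's metric names and implementation differ):

```python
# Sketch: render one counter in Prometheus text exposition format.
def render_counter(name, help_text, value, labels=None):
    """Return HELP/TYPE lines plus one labeled sample line."""
    label_str = ""
    if labels:
        inner = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + inner + "}"
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} counter\n"
        f"{name}{label_str} {value}\n"
    )

out = render_counter(
    "xinference_requests_total",  # hypothetical metric name
    "Total requests served.",
    42,
    labels={"model": "llama-2-chat"},
)
print(out)
```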
Enhancements
- ENH: Launch model asynchronously by @ChengjieLi28 in #879
- ENH: Support qwen-vl via modelscope by @codingl2k1 in #902
- ENH: Add "tools" in model ability by @aresnow1 in #904
- ENH: Add quantization support for qwen chat by @aresnow1 in #910
Bug fixes
- BUG: Fix prompt template of chatglm3-32k by @aresnow1 in #889
- BUG: Fix invalid volume in docker-compose YAML by @ChengjieLi28 in #890
- BUG: Revert #883 by @aresnow1 in #903
- BUG: Fix chatglm backend by @codingl2k1 in #898
- BUG: Fix tool calls on custom model by @codingl2k1 in #899
- BUG: Fix is_valid_model_name by @aresnow1 in #907
Documentation
- DOC: Update the documentation about use of docker by @aresnow1 in #901
- DOC: Add FAQ in troubleshooting.rst by @sisuad in #911
Full Changelog: v0.8.0...v0.8.1