v1.55.0
3 commits to aa7f416b7f8ac2c0303dfb5963c04e9811bf003b since this release
What's Changed
- Litellm code qa common config by @krrishdholakia in #7113
- (Refactor) Code Quality improvement - use Common base handler for Cohere by @ishaan-jaff in #7117
- (Refactor) Code Quality improvement - Use Common base handler for `clarifai/` by @ishaan-jaff in #7125
- (Refactor) Code Quality improvement - Use Common base handler for `cloudflare/` provider by @ishaan-jaff in #7127
- (Refactor) Code Quality improvement - Use Common base handler for Cohere /generate API by @ishaan-jaff in #7122
- (Refactor) Code Quality improvement - Use Common base handler for `anthropic_text/` by @ishaan-jaff in #7143
- docs: document code quality by @krrishdholakia in #7149
- (Refactor) Code Quality improvement - stop redefining LiteLLMBase by @ishaan-jaff in #7147
- LiteLLM Common Base LLM Config (pt.2) by @krrishdholakia in #7146
- LiteLLM Common Base LLM Config (pt.3): Move all OAI compatible providers to base llm config by @krrishdholakia in #7148
- refactor(sagemaker/): separate chat + completion routes + make them b… by @krrishdholakia in #7151
- rename `llms/OpenAI/` -> `llms/openai/` by @ishaan-jaff in #7154
- Code Quality improvement - remove symlink to `requirements.txt` from within litellm by @ishaan-jaff in #7155
- LiteLLM Common Base LLM Config (pt.4): Move Ollama to Base LLM Config by @krrishdholakia in #7157
- Code Quality Improvement - remove `file_apis`, `fine_tuning_apis` from `/llms` by @ishaan-jaff in #7156
- Revert "LiteLLM Common Base LLM Config (pt.4): Move Ollama to Base LLM Config" by @krrishdholakia in #7160
- Litellm ollama refactor by @krrishdholakia in #7162
- Litellm vllm refactor by @krrishdholakia in #7158
- Litellm merge pr by @krrishdholakia in #7161
- Code Quality Improvement - remove `tokenizers/` from `/llms` by @ishaan-jaff in #7163
- build(deps): bump nanoid from 3.3.7 to 3.3.8 in /docs/my-website by @dependabot in #7159
- (Refactor) Code Quality improvement - remove `/prompt_templates/`, `base_aws_llm.py` from `/llms` folder by @ishaan-jaff in #7164
- Code Quality Improvement - use `vertex_ai/` as folder name for vertexAI by @ishaan-jaff in #7166
- Code Quality Improvement - move `aleph_alpha` to deprecated_providers by @ishaan-jaff in #7168
- (Refactor) Code Quality improvement - rename `text_completion_codestral.py` -> `codestral/completion/` by @ishaan-jaff in #7172
- (Code Quality) - Add test to enforce all folders in `/llms` are a litellm provider by @ishaan-jaff in #7175
- fix(get_supported_openai_params.py): cleanup by @krrishdholakia in #7176
- fix(acompletion): support fallbacks on acompletion by @krrishdholakia in #7184
Full Changelog: v1.54.1...v1.55.0
Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.55.0
```
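Once the container is up, you can verify the proxy with an OpenAI-style request. A minimal sketch, assuming the proxy is listening on port 4000 as mapped above; the model name `gpt-3.5-turbo` is a placeholder for whichever model you have configured on the proxy:

```shell
# Placeholder request against a locally running proxy; adjust the
# model name (and add an Authorization header if you set a master key).
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```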
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 250.0 | 286.2 | 5.89 | 0.003 | 1762 | 1 | 211.68 | 3578.41 |
| Aggregated | Passed ✅ | 250.0 | 286.2 | 5.89 | 0.003 | 1762 | 1 | 211.68 | 3578.41 |