Releases: BerriAI/litellm
v1.20.8
What's Changed
- [Feat] Add Admin UI on Proxy Server (Static Web App) by @ishaan-jaff in #1726
- [Fix-UI] If user is already logged in using SSO, set allow_user_auth: True by @ishaan-jaff in #1728
- feat: add langfuse cost tracking by @maxdeichmann in #1704
- [Fix] UI - Use jwts by @ishaan-jaff in #1730
- fix(utils.py): support checking if user defined max tokens exceeds model limit by @krrishdholakia in #1729
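The max-tokens check in #1729 can be sketched roughly as follows. This is a minimal illustration, not litellm's actual implementation: `MODEL_MAX_TOKENS` and `clamp_max_tokens` are hypothetical names, and the limits shown are examples only.

```python
# Hypothetical lookup table of per-model output-token limits (illustrative values).
MODEL_MAX_TOKENS = {
    "gpt-4-0125-preview": 4096,
    "gpt-3.5-turbo": 4096,
}

def clamp_max_tokens(model: str, requested: int) -> int:
    """Return the user-requested max_tokens, capped at the model's known limit."""
    limit = MODEL_MAX_TOKENS.get(model)
    if limit is not None and requested > limit:
        return limit
    return requested
```

For example, `clamp_max_tokens("gpt-3.5-turbo", 10000)` would be capped to 4096, while an unknown model passes the request through unchanged.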
Full Changelog: v1.20.7...v1.20.8
v1.20.7
What's Changed
- [Feat-UI] Make SSO Optional by @ishaan-jaff in #1697
- fix(proxy_server.py): speed up proxy startup time by @krrishdholakia in #1696
- feat(proxy_server.py): enable cache controls per key + no-store cache flag by @krrishdholakia in #1700
- fix(proxy_server.py): don't log sk-.. as part of logging object by @krrishdholakia in #1720
- [Feat] Langfuse log embeddings by @ishaan-jaff in #1722
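The cache controls from #1700 are passed per request. The payload below is only a sketch of the request shape: the `"cache"` field and its `"no-store"` key are assumptions based on the release note, not a verified litellm schema, so check the litellm caching docs for the supported fields.

```python
import json

# Hypothetical proxy request body with a per-request cache-control flag.
request_body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hello"}],
    "cache": {"no-store": True},  # ask the proxy not to store this response
}
payload = json.dumps(request_body)
```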
Full Changelog: v1.20.6...v1.20.7
v1.20.6
What's Changed
- [Fix] Graceful rejection of token input for AWS Embeddings API by @ishaan-jaff in #1685
- fix(main.py): register both model name and model name with provider by @krrishdholakia in #1690
- [Feat] LiteLLM Proxy: MSFT SSO Login by @ishaan-jaff in #1691
- Fixes for model cost check and streaming by @krrishdholakia in #1693
- [Feat] Set OpenAI organization for litellm.completion, Proxy Config by @ishaan-jaff in #1689
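The organization setting from #1689 would typically live in the proxy config. This fragment is a sketch: `model_list` and `litellm_params` follow litellm's documented config layout, but placing `organization` under `litellm_params` is an assumption from the release note.

```yaml
# Hypothetical proxy config fragment: pin an OpenAI organization per model.
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
      organization: org-XXXXXXXX  # your OpenAI organization id
```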
Full Changelog: v1.20.5...v1.20.6
v1.20.5
What's Changed
- [Feat-UI] Add form for /key/gen params by @ishaan-jaff in #1678
- build(proxy_cli.py): make running gunicorn an optional cli arg by @krrishdholakia in #1675
- Change quota project to the correct project being used for the call by @eslamkarim in #1657
New Contributors
- @eslamkarim made their first contribution in #1657
Full Changelog: v1.20.3...v1.20.5
v1.20.3
What's Changed
- Fix mistral's prompt template by @sa- in #1661
- [UI] Improve LiteLLM admin UI by @ishaan-jaff in #1669
- [Feat] return litellm version in health/readiness by @ishaan-jaff in #1673
- [Fix]Litellm patch dynamoDB by @ishaan-jaff in #1676
Full Changelog: v1.20.2...v1.20.3
v1.20.2
LiteLLM Proxy UI 2.0 Launch
- Team members can create and delete proxy API keys
- Team members can sign in using Google SSO
Doc on Setting Up Admin UI: https://docs.litellm.ai/docs/proxy/ui
Link to Admin UI: https://litellm-dashboard.vercel.app/
UI Demo
- feat(main.py): support auto-infering mode if not set by @krrishdholakia in #1652
- build(ui/litellm-dashboard): initial commit of litellm dashboard by @krrishdholakia in #1649
Full Changelog: v1.20.1...v1.20.2
v1.20.1
What's Changed
- [Feat] add google login to litellm proxy by @ishaan-jaff in #1650
  [BETA] Sign in to your proxy with Google SSO through the LiteLLM UI; view and manage proxy API keys
- Allow optional usage of the tls encryption for SMTP by @scampion in #1648
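Google SSO for the proxy is wired up through OAuth client credentials supplied as environment variables. The variable names below are assumptions based on common litellm convention; consult the Admin UI docs linked in later releases for the exact names.

```shell
# Hypothetical environment setup for Google SSO on the proxy (names assumed).
export GOOGLE_CLIENT_ID="your-oauth-client-id"
export GOOGLE_CLIENT_SECRET="your-oauth-client-secret"
```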
Full Changelog: v1.20.0...v1.20.1
v1.20.0
What's Changed
- feat(proxy_server.py): save abbreviated key name if `allow_user_auth` enabled by @krrishdholakia in #1642
- Litellm image gen cost tracking proxy by @krrishdholakia in #1646
Full Changelog: v1.19.6...v1.20.0
v1.19.6
What's Changed
- fix(utils.py): fix sagemaker cost tracking for streaming by @krrishdholakia in #1618
- [FIX] print verbose take only one argument by @scampion in #1630
- [Fix] SpendLogs stop logging model params by @ishaan-jaff in #1634
- [Feat] support `dimensions` OpenAI embedding param by @ishaan-jaff in #1635
- [FIX] Fixes bug where keys could cross their key max_budget by @ishaan-jaff in #1640
- [Feat] Improve alert formats for Key budgets by @ishaan-jaff in #1644
- feat(utils.py): support region based pricing for bedrock + use bedrock's token counts if given by @krrishdholakia in #1641
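The `dimensions` embedding param from #1635 mirrors OpenAI's API, where text-embedding-3 models can return truncated, shorter vectors. Since a real call needs an API key, only the request shape is sketched here; a live call would pass the same fields to `litellm.embedding`.

```python
# Sketch of an embedding request using the OpenAI-style "dimensions" parameter.
embedding_request = {
    "model": "text-embedding-3-small",
    "input": ["good morning from litellm"],
    "dimensions": 256,  # request a truncated 256-dimension vector
}
```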
Full Changelog: v1.19.4...v1.19.6
v1.19.4
What's Changed
New OpenAI Models
- `text-embedding-3-large`
- `text-embedding-3-small`
- `gpt-4-0125-preview`
Usage - New Embedding models
```python
import litellm

response = litellm.embedding(
    model="text-embedding-3-large",
    input=["good morning from litellm", "this is another item"],
    metadata={"anything": "good day"},
)

response = litellm.embedding(
    model="text-embedding-3-small",
    input=["good morning from litellm", "this is another item"],
    metadata={"anything": "good day"},
)
```
Usage - gpt-4-0125-preview
```python
import litellm

messages = [{"role": "user", "content": "Hey, how's it going?"}]
response = litellm.completion(
    model="gpt-4-0125-preview",
    messages=messages,
    max_tokens=10,
)
```
- fix(main.py): allow vertex ai project and location to be set in completion() call by @krrishdholakia in #1623
- [UI] Admin UI improvements by @ishaan-jaff in #1625
Full Changelog: v1.19.3...v1.19.4