Releases: BerriAI/litellm
v1.26.6
What's Changed
- fix(llm_guard.py): add streaming hook for moderation calls by @krrishdholakia in #2106
Full Changelog: v1.26.5...v1.26.6
v1.26.5
What's Changed
- fix(sagemaker.py): fix async sagemaker calls by @krrishdholakia in #2103 (see the sketch after this list)
- fix(presidio_pii_masking.py): enable user to pass their own ad hoc recognizers to presidio by @krrishdholakia in #2100
- (docs) Router - use correct base model for cost by @ishaan-jaff in #2107
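With the fix in #2103, SageMaker models work through LiteLLM's async entry point. A minimal sketch, assuming AWS credentials are configured in the environment; the endpoint name below is a hypothetical placeholder:

```python
# Minimal sketch: async completion against a SageMaker endpoint.
# Assumes AWS credentials are set; the endpoint name is hypothetical.
import asyncio
from litellm import acompletion

async def main():
    resp = await acompletion(
        model="sagemaker/my-llama2-endpoint",  # format: "sagemaker/<your-endpoint-name>"
        messages=[{"role": "user", "content": "Hello from async SageMaker"}],
    )
    print(resp.choices[0].message.content)

asyncio.run(main())
```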
Full Changelog: v1.26.4...v1.26.5
v1.26.4
What's Changed
- [FEAT] UI - View all users by @ishaan-jaff in #2101
- [FEAT] UI - Create Users on LiteLLM UI by @ishaan-jaff in #2102
- [Fix] Unexpected Model Deletion in POST /key/update When Updating Team ID by @ishaan-jaff in #2104
- fix(gemini.py): fix async streaming + add native async completions by @krrishdholakia in #2090
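A minimal sketch of the new native async path for Gemini, assuming GEMINI_API_KEY is set; the model name is illustrative:

```python
# Minimal sketch: native async completion + async streaming for Gemini.
# Assumes GEMINI_API_KEY is set; "gemini/gemini-pro" is illustrative.
import asyncio
from litellm import acompletion

async def main():
    # Non-streaming async call
    resp = await acompletion(
        model="gemini/gemini-pro",
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(resp.choices[0].message.content)

    # Async streaming
    stream = await acompletion(
        model="gemini/gemini-pro",
        messages=[{"role": "user", "content": "Count to 3"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")

asyncio.run(main())
```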
Full Changelog: v1.26.3...v1.26.4
v1.26.3
What's Changed
- feat(llm_guard.py): support llm guard for content moderation by @krrishdholakia in #2087
Full Changelog: v1.26.2...v1.26.3
v1.26.2
What's Changed
- [FEAT] proxy backend - Save User Model Requests in DB by @ishaan-jaff in #2080
- [Feat] UI - allow a user to request access to a model by @ishaan-jaff in #2077
- [FEAT] Admin UI - Approve / Deny user model requests by @ishaan-jaff in #2089
Full Changelog: v1.26.1...v1.26.2
v1.26.1
What's Changed
- [FEAT] /model/info show models user has access to by @ishaan-jaff in #2066
- [FEAT] UI Set tpm / rpm limits by @ishaan-jaff in #2073
Full Changelog: v1.26.0...v1.26.1
v1.26.0
What's Changed
- feat(llama_guard.py): allow user to define custom unsafe content categories by @krrishdholakia in #2045
- feat(google_text_moderation.py): allow user to use google text moderation for content mod on proxy by @krrishdholakia in #2046
- feat(proxy_server.py): return all teams a user is a member of in /user/info by @krrishdholakia in #2049 (example after this list)
- fix(ci): platforms input by @adrien-f in #2060
- fix(ci): set up QEMU and Buildx for multi platform images by @adrien-f in #2065
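A minimal sketch of calling the updated endpoint, assuming a proxy at localhost:4000 with master key sk-1234 (both placeholders); the user_id query parameter and the sample value are illustrative:

```python
# Minimal sketch: fetch user info, including all teams the user belongs to.
# Host, key, and user_id below are placeholders.
import requests

resp = requests.get(
    "http://localhost:4000/user/info",
    params={"user_id": "krrish@example.com"},  # hypothetical user id
    headers={"Authorization": "Bearer sk-1234"},
)
print(resp.json())  # response includes the teams the user is a member of
```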
New Contributors
- @adrien-f made their first contribution in #2060
Full Changelog: v1.25.2...v1.26.0
v1.25.2
Control your company's LLM spend with budgets, call 100+ LLMs
What's Changed
- [FEAT] Track spend per model (for Key, User and Team) by @ishaan-jaff in #2022
- Add safety_settings parameter to gemini generate_content calls by @afbarbaro in #2013 (see the sketch after this list)
- [FIX] Spend Tracking bug for Keys made on Admin UI (when role = proxy_admin) by @ishaan-jaff in #2047
- feat(presidio_pii_masking.py): allow request level controls for turning on/off pii masking by @krrishdholakia in #2037
- [FEAT] UI - reload key spend info by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/2048
- [FEAT] Budgets per Model (for a key) by @ishaan-jaff in #2029
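A minimal sketch of forwarding safety_settings through LiteLLM to Gemini, per #2013. Assumes GEMINI_API_KEY is set; the category/threshold values are standard Google AI constants, shown here as illustrations:

```python
# Minimal sketch: pass Gemini safety settings through litellm.completion.
# Assumes GEMINI_API_KEY is set; values are illustrative Google AI constants.
from litellm import completion

resp = completion(
    model="gemini/gemini-pro",
    messages=[{"role": "user", "content": "Write a short greeting"}],
    safety_settings=[
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_ONLY_HIGH"},
    ],
)
print(resp.choices[0].message.content)
```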
New Contributors
- @afbarbaro made their first contribution in #2013
Full Changelog: v1.25.1...v1.25.2
v1.25.1
What's Changed
- nit: added urls to pyproject.toml by @ErikBjare in #1536
- [FIX] UI - fix method not allowed error by @ishaan-jaff in #2041
- [FEAT] Set FastAPI root path by @ishaan-jaff in #2039
Important
In a Kubernetes deployment, it's possible to utilize a shared DNS to host multiple applications by modifying the virtual service.
👉 Set SERVER_ROOT_PATH in your .env and this will be set as your server root path.
Docs here: https://docs.litellm.ai/docs/proxy/deploy#advanced-deployment-settings
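Once the proxy is mounted under a root path, clients append it to the base URL. A minimal sketch, assuming SERVER_ROOT_PATH="/litellm" and a proxy at localhost:4000 (all placeholders):

```python
# Minimal sketch: call a proxy mounted under SERVER_ROOT_PATH="/litellm".
# Host, port, path, and key below are placeholders.
import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://localhost:4000/litellm",  # root path appended to base URL
)
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
print(resp.choices[0].message.content)
```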
Full Changelog: v1.25.0...v1.25.1
v1.25.0
What's Changed
- feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint by @krrishdholakia in #2031
- fix for importlib compatibility issue for python 3.8 by @sorokine in #2017
Full Changelog: v1.24.6...v1.25.0