Releases: BerriAI/litellm

v1.16.6

08 Jan 07:05 · 442ebdd

Full Changelog: v1.16.18...v1.16.6

v1.16.16

06 Jan 17:50

What's Changed

Full Changelog: v1.16.15...v1.16.16

v1.16.15

06 Jan 11:25

litellm 1.16.15

What's Changed
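
The example below computes the dollar cost of an async embedding call with litellm.completion_cost():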

import asyncio

import litellm

async def _test():
    # async embedding call against an Azure embedding deployment
    response = await litellm.aembedding(
        model="azure/azure-embedding-model",
        input=["good morning from litellm", "gm"],
    )

    print(response)

    return response

response = asyncio.run(_test())

# compute the dollar cost of the embedding call
cost = litellm.completion_cost(completion_response=response)
  • litellm.completion_cost() now raises exceptions instead of swallowing them @jeromeroussin
  • Improved token counting for Azure streaming responses @langgg0511 #1304
  • Set os.environ/ variables for the litellm proxy cache @Manouchehri
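
For example, the proxy's config.yaml can enable the S3 cache and pull AWS credentials from environment variables via the os.environ/ prefix:
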
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
  - model_name: text-embedding-ada-002
    litellm_params:
      model: text-embedding-ada-002

litellm_settings:
  set_verbose: True
  cache: True          # set cache responses to True
  cache_params:        # set cache params for s3
    type: s3
    s3_bucket_name: cache-bucket-litellm   # AWS Bucket Name for S3
    s3_region_name: us-west-2              # AWS Region Name for S3
    s3_aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID  # use os.environ/<variable name> to pass environment variables; this is the AWS Access Key ID for S3
    s3_aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY  # AWS Secret Access Key for S3

Full Changelog: 1.16.14...v1.16.15

1.16.14

06 Jan 09:42

Full Changelog: v1.16.13...1.16.14

1.16.12

04 Jan 08:36 · 540dc8e

New providers

Xinference Embeddings: https://docs.litellm.ai/docs/providers/xinference
Voyage AI: https://docs.litellm.ai/docs/providers/voyage
Cloudflare AI workers: https://docs.litellm.ai/docs/providers/cloudflare_workers
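
A minimal sketch of calling two of the new providers through the standard litellm interface; the model names and environment variables below are illustrative (check the linked docs for current values):

import litellm

# Voyage AI embeddings (assumes VOYAGE_API_KEY is set in the environment)
embedding = litellm.embedding(
    model="voyage/voyage-01",
    input=["good morning from litellm"],
)

# Cloudflare Workers AI completion (assumes CLOUDFLARE_API_KEY and
# CLOUDFLARE_ACCOUNT_ID are set in the environment)
response = litellm.completion(
    model="cloudflare/@cf/meta/llama-2-7b-chat-int8",
    messages=[{"role": "user", "content": "hello"}],
)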

Fixes:

AWS region name error when passing user bedrock client: #1292
Azure OpenAI models - use the correct context window in model_prices_and_context_window.json
Fixes for Azure OpenAI + Streaming - counting prompt tokens correctly: #1264

What's Changed

New Contributors

Full Changelog: v1.15.0...1.16.12

v1.15.0

16 Dec 13:56

What's Changed

LiteLLM Proxy now maps exceptions for 100+ LLMs to the OpenAI format https://docs.litellm.ai/docs/proxy/quick_start
🧨 Log all LLM input/output to DynamoDB: set litellm.success_callback = ["dynamodb"] (see the sketch after this list) https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---dynamodb
⭐️ Support for the Mistral AI API and Gemini Pro
🔎 Set aliases for model groups on the LiteLLM Proxy
🔎 Exception mapping for openai.NotFoundError is live now, with tests for exception mapping on the proxy added to LiteLLM CI/CD https://docs.litellm.ai/docs/exception_mapping
⚙️ Fixes for async + streaming caching https://docs.litellm.ai/docs/proxy/caching
👉 Async logging with Langfuse now live on the proxy
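
A minimal sketch of the DynamoDB logging hook, assuming AWS credentials and the DynamoDB table are already set up as described in the linked logging docs:

import litellm

# log every successful LLM call's input/output to DynamoDB
litellm.success_callback = ["dynamodb"]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)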

AI Generated Release Notes

New Contributors

Full Changelog: v1.11.1...v1.15.0

v1.11.1

07 Dec 18:25

Proxy

  • Bug fix for non-OpenAI LLMs on the proxy
  • Major stability improvements and fixes, plus added test cases for the proxy
  • Async success/failure loggers
  • Support for using custom loggers with aembedding() (see the sketch below)
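
A minimal sketch of a custom logger used with aembedding(); the function name is illustrative, and the callback signature follows litellm's custom-callback pattern:

import asyncio

import litellm

def my_custom_logger(kwargs, completion_response, start_time, end_time):
    # runs after each successful call; log whatever you need here
    print(f"model={kwargs.get('model')} latency={end_time - start_time}")

litellm.success_callback = [my_custom_logger]

async def main():
    return await litellm.aembedding(
        model="text-embedding-ada-002",
        input=["hello world"],
    )

asyncio.run(main())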

What's Changed

New Contributors

Full Changelog: v1.10.4...v1.11.1

v1.10.4

05 Dec 05:07

Note: the Proxy Server on 1.10.4 has a bug for non-OpenAI LLMs; fixed in 1.10.11.

Updates Proxy Server

litellm Package

What's Changed

  • docs: adds gpt-3.5-turbo-1106 in supported models by @rishabgit in #958
  • (feat) Allow installing proxy dependencies explicitly with pip install litellm[proxy] by @PSU3D0 in #966
  • Mention Neon as a database option in docs by @Manouchehri in #977
  • fix system prompts for replicate by @nbaldwin98 in #970

New Contributors

Full Changelog: v1.7.11...v1.10.4

v1.7.11

29 Nov 05:52

💥 LiteLLM Router + Proxy handles 500+ requests/second

💥 LiteLLM Proxy now handles 500+ requests/second, load-balances Azure + OpenAI deployments, and tracks spend per user 💥
Try it here: https://docs.litellm.ai/docs/simple_proxy
🔑 Support for AZURE_OPENAI_API_KEY on Azure https://docs.litellm.ai/docs/providers/azure h/t @solyarisoftware
⚡️ LiteLLM Router can now handle 20% more throughput https://docs.litellm.ai/docs/routing (see the sketch below)
📖 Improvements to the litellm debugging docs https://docs.litellm.ai/docs/debugging/local_debugging h/t @solyarisoftware
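
A minimal sketch of load-balancing Azure and OpenAI deployments with the Router; the deployment name, keys, and api_base below are placeholders:

import os

from litellm import Router

# two deployments behind one model group name, "gpt-3.5-turbo"
model_list = [
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "azure/my-azure-deployment",  # placeholder deployment name
            "api_key": os.environ["AZURE_OPENAI_API_KEY"],
            "api_base": os.environ["AZURE_API_BASE"],  # placeholder env var
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": os.environ["OPENAI_API_KEY"],
        },
    },
]

router = Router(model_list=model_list)

# the Router picks a deployment from the "gpt-3.5-turbo" group for each call
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)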

Full Changelog: v1.7.1...v1.7.11

v1.7.1

25 Nov 23:21

What's Changed

  • 🚨 From this release onwards, the LiteLLM Proxy uses async completion/embedding calls - this delivered 30x more throughput for completion/embedding calls (see the sketch below)
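
A minimal sketch of the async call pattern, assuming OPENAI_API_KEY is set in the environment; requests are dispatched concurrently with asyncio.gather instead of blocking one at a time:

import asyncio

import litellm

async def ask(prompt: str):
    # non-blocking completion call
    return await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )

async def main():
    # many requests in flight at once
    return await asyncio.gather(*(ask(f"question {i}") for i in range(10)))

responses = asyncio.run(main())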

New Contributors

Full Changelog: v1.1.0...v1.7.1