
v1.22.10

@github-actions released this 07 Feb 02:54
· 11613 commits to main since this release

What's Changed

  • fix(proxy_server.py): do a health check on db before returning if proxy ready (if db connected) by @krrishdholakia in #1856
  • fix(utils.py): return finish reason for last vertex ai chunk by @krrishdholakia in #1847
  • fix(proxy/utils.py): if langfuse trace id passed in, include in slack alert by @krrishdholakia in #1839
  • [Feat] Budgets for 'user' param passed to /chat/completions, /embeddings etc by @ishaan-jaff in #1859

Semantic Caching Support - Add Semantic Caching to litellm💰 by @ishaan-jaff in #1829

Usage with Proxy

Step 1: Add cache to the config.yaml

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
  - model_name: azure-embedding-model
    litellm_params:
      model: azure/azure-embedding-model
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
      api_version: "2023-07-01-preview"

litellm_settings:
  set_verbose: True
  cache: True          # set cache responses to True, litellm defaults to using a redis cache
  cache_params:
    type: "redis-semantic"  
    similarity_threshold: 0.8   # similarity threshold for semantic cache
    redis_semantic_cache_embedding_model: azure-embedding-model # set this to a model_name set in model_list

Step 2: Add Redis Credentials to .env

Set either REDIS_URL or REDIS_HOST in your OS environment to enable caching.

REDIS_URL = ""        # REDIS_URL='redis://username:password@hostname:port/database'
## OR ## 
REDIS_HOST = ""       # REDIS_HOST='redis-18841.c274.us-east-1-3.ec2.cloud.redislabs.com'
REDIS_PORT = ""       # REDIS_PORT='18841'
REDIS_PASSWORD = ""   # REDIS_PASSWORD='liteLlmIsAmazing'

Additional kwargs
You can pass any additional redis.Redis argument by setting the variable and its value in your OS environment, like this:

REDIS_<redis-kwarg-name> = ""
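For example, assuming your Redis deployment requires TLS (a hypothetical setup, not part of the release), the redis.Redis `ssl` argument could be passed like this:

```shell
# Assumption: the deployment uses TLS, so pass ssl=True through to redis.Redis
export REDIS_SSL="True"
```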

Step 3: Run proxy with config

$ litellm --config /path/to/config.yaml

That's it!
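Under the hood, "redis-semantic" caching compares prompt embeddings rather than exact strings: a new prompt is a cache hit if its embedding is similar enough to a cached one. A toy sketch of that similarity test, using hand-rolled cosine similarity and made-up 3-d "embeddings" (not litellm's actual implementation):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up vectors: two similar prompts and one unrelated prompt
prompt_a = [0.9, 0.1, 0.05]
prompt_b = [0.85, 0.15, 0.1]   # paraphrase of prompt_a
prompt_c = [0.05, 0.1, 0.9]    # unrelated

threshold = 0.8  # similarity_threshold from the config above

print(cosine_similarity(prompt_a, prompt_b) >= threshold)  # True  -> cache hit
print(cosine_similarity(prompt_a, prompt_c) >= threshold)  # False -> cache miss
```

In the real cache, the embeddings come from the `redis_semantic_cache_embedding_model` configured above.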

(You'll see semantic-similarity on langfuse if you set langfuse as a success_callback)
(FYI: the API key shown here is deleted 🔑)

[Screenshot, 2024-02-06: Langfuse trace showing semantic-similarity]

Usage with litellm.completion

import os
import random

import litellm
from litellm import Cache, completion

litellm.cache = Cache(
    type="redis-semantic",
    host=os.environ["REDIS_HOST"],
    port=os.environ["REDIS_PORT"],
    password=os.environ["REDIS_PASSWORD"],
    similarity_threshold=0.8,
    redis_semantic_cache_embedding_model="text-embedding-ada-002",
)

random_number = random.randint(1, 100000)

response1 = completion(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": f"write a one sentence poem about: {random_number}",
        }
    ],
    max_tokens=20,
)
print(f"response1: {response1}")

random_number = random.randint(1, 100000)

response2 = completion(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": f"write a one sentence poem about: {random_number}",
        }
    ],
    max_tokens=20,
)
print(f"response2: {response2}")

# Semantically similar prompts hit the cache, so both responses share an id
assert response1.id == response2.id

Budgets for 'user' param passed to /chat/completions, /embeddings etc

Set a budget for the 'user' param passed to /chat/completions, without needing to create a key for every user.
docs: https://docs.litellm.ai/docs/proxy/users

How to Use

  1. Define litellm.max_user_budget in your config

litellm_settings:
  max_budget: 10      # global budget for proxy
  max_user_budget: 0.0001 # budget for 'user' passed to /chat/completions

  2. Make a /chat/completions call, pass 'user' - First call works
curl --location 'http://0.0.0.0:4000/chat/completions' \
        --header 'Content-Type: application/json' \
        --header 'Authorization: Bearer sk-zi5onDRdHGD24v0Zdn7VBA' \
        --data ' {
        "model": "azure-gpt-3.5",
        "user": "ishaan3",
        "messages": [
            {
            "role": "user",
            "content": "what time is it"
            }
        ]
        }'
  3. Make a /chat/completions call, pass 'user' - Call fails, since 'ishaan3' is over budget
curl --location 'http://0.0.0.0:4000/chat/completions' \
        --header 'Content-Type: application/json' \
        --header 'Authorization: Bearer sk-zi5onDRdHGD24v0Zdn7VBA' \
        --data ' {
        "model": "azure-gpt-3.5",
        "user": "ishaan3",
        "messages": [
            {
            "role": "user",
            "content": "what time is it"
            }
        ]
        }'

Error

{"error":{"message":"Authentication Error, ExceededBudget: User ishaan3 has exceeded their budget. Current spend: 0.0008869999999999999; Max Budget: 0.0001","type":"auth_error","param":"None","code":401}}
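The same call can be made from Python; a minimal sketch with the stdlib `urllib`, reusing the placeholder proxy URL and key from the curl examples above:

```python
import json
import urllib.request

payload = {
    "model": "azure-gpt-3.5",
    "user": "ishaan3",
    "messages": [{"role": "user", "content": "what time is it"}],
}

req = urllib.request.Request(
    url="http://0.0.0.0:4000/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-zi5onDRdHGD24v0Zdn7VBA",
    },
    method="POST",
)

# Sending requires the proxy from Step 3 to be running:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Once the tracked spend for "ishaan3" exceeds max_user_budget, the proxy returns the 401 ExceededBudget error shown above.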

Full Changelog: v1.22.9...v1.22.10