Commit 722e89b: Merge branch 'BerriAI:main' into main
hughcrt authored Jun 17, 2024
2 parents e0608ea + 3a35a58
Showing 130 changed files with 24,735 additions and 18,003 deletions.
22 changes: 19 additions & 3 deletions .github/workflows/ghcr_deploy.yml
@@ -25,6 +25,11 @@ jobs:
     if: github.repository == 'BerriAI/litellm'
     runs-on: ubuntu-latest
     steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v4
+        with:
+          ref: ${{ github.event.inputs.commit_hash }}
       -
         name: Set up QEMU
         uses: docker/setup-qemu-action@v3
@@ -41,19 +46,22 @@ jobs:
         name: Build and push
         uses: docker/build-push-action@v5
         with:
+          context: .
           push: true
           tags: litellm/litellm:${{ github.event.inputs.tag || 'latest' }}
       -
         name: Build and push litellm-database image
         uses: docker/build-push-action@v5
         with:
+          context: .
           push: true
           file: Dockerfile.database
           tags: litellm/litellm-database:${{ github.event.inputs.tag || 'latest' }}
       -
         name: Build and push litellm-spend-logs image
         uses: docker/build-push-action@v5
         with:
+          context: .
           push: true
           file: ./litellm-js/spend-logs/Dockerfile
           tags: litellm/litellm-spend_logs:${{ github.event.inputs.tag || 'latest' }}
@@ -68,6 +76,8 @@ jobs:
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
+        with:
+          ref: ${{ github.event.inputs.commit_hash }}
       # Uses the `docker/login-action` action to log in to the Container registry registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
       - name: Log in to the Container registry
         uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
@@ -92,7 +102,7 @@ jobs:
       - name: Build and push Docker image
         uses: docker/build-push-action@4976231911ebf5f32aad765192d35f942aa48cb8
         with:
-          context: https://github.com/BerriAI/litellm.git#${{ github.event.inputs.commit_hash}}
+          context: .
           push: true
           tags: ${{ steps.meta.outputs.tags }}-${{ github.event.inputs.tag || 'latest' }}, ${{ steps.meta.outputs.tags }}-${{ github.event.inputs.release_type }} # if a tag is provided, use that, otherwise use the release tag, and if neither is available, use 'latest'
           labels: ${{ steps.meta.outputs.labels }}
@@ -106,6 +116,8 @@ jobs:
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
+        with:
+          ref: ${{ github.event.inputs.commit_hash }}
 
       - name: Log in to the Container registry
         uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
@@ -128,7 +140,7 @@ jobs:
       - name: Build and push Database Docker image
         uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
         with:
-          context: https://github.com/BerriAI/litellm.git#${{ github.event.inputs.commit_hash}}
+          context: .
           file: Dockerfile.database
           push: true
           tags: ${{ steps.meta-database.outputs.tags }}-${{ github.event.inputs.tag || 'latest' }}, ${{ steps.meta-database.outputs.tags }}-${{ github.event.inputs.release_type }}
@@ -143,6 +155,8 @@ jobs:
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
+        with:
+          ref: ${{ github.event.inputs.commit_hash }}
 
       - name: Log in to the Container registry
         uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
@@ -165,7 +179,7 @@ jobs:
       - name: Build and push Database Docker image
         uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
         with:
-          context: https://github.com/BerriAI/litellm.git#${{ github.event.inputs.commit_hash}}
+          context: .
           file: ./litellm-js/spend-logs/Dockerfile
           push: true
           tags: ${{ steps.meta-spend-logs.outputs.tags }}-${{ github.event.inputs.tag || 'latest' }}, ${{ steps.meta-spend-logs.outputs.tags }}-${{ github.event.inputs.release_type }}
@@ -176,6 +190,8 @@ jobs:
     steps:
       - name: Checkout repository
         uses: actions/checkout@v4
+        with:
+          ref: ${{ github.event.inputs.commit_hash }}
 
       - name: Log in to the Container registry
         uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
30 changes: 22 additions & 8 deletions .pre-commit-config.yaml
@@ -1,4 +1,19 @@
 repos:
+  - repo: local
+    hooks:
+      # - id: mypy
+      #   name: mypy
+      #   entry: python3 -m mypy --ignore-missing-imports
+      #   language: system
+      #   types: [python]
+      #   files: ^litellm/
+      - id: isort
+        name: isort
+        entry: isort
+        language: system
+        types: [python]
+        files: litellm/.*\.py
+        exclude: ^litellm/__init__.py$
   - repo: https://github.com/psf/black
     rev: 24.2.0
     hooks:
@@ -16,11 +31,10 @@ repos:
         name: Check if files match
        entry: python3 ci_cd/check_files_match.py
         language: system
-  - repo: local
-    hooks:
-      - id: mypy
-        name: mypy
-        entry: python3 -m mypy --ignore-missing-imports
-        language: system
-        types: [python]
-        files: ^litellm/
+      # - id: check-file-length
+      #   name: Check file length
+      #   entry: python check_file_length.py
+      #   args: ["10000"] # set your desired maximum number of lines
+      #   language: python
+      #   files: litellm/.*\.py
+      #   exclude: ^litellm/tests/
28 changes: 28 additions & 0 deletions check_file_length.py
@@ -0,0 +1,28 @@
+import sys
+
+
+def check_file_length(max_lines, filenames):
+    bad_files = []
+    for filename in filenames:
+        with open(filename, "r") as file:
+            lines = file.readlines()
+            if len(lines) > max_lines:
+                bad_files.append((filename, len(lines)))
+    return bad_files
+
+
+if __name__ == "__main__":
+    max_lines = int(sys.argv[1])
+    filenames = sys.argv[2:]
+
+    bad_files = check_file_length(max_lines, filenames)
+    if bad_files:
+        bad_files.sort(
+            key=lambda x: x[1], reverse=True
+        )  # Sort files by length in descending order
+        for filename, length in bad_files:
+            print(f"{filename}: {length} lines")
+
+        sys.exit(1)
+    else:
+        sys.exit(0)
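
As a usage sketch, the new checker can be driven the same way the commented-out pre-commit hook above would invoke it; the 10,000-line limit mirrors that hook's `args`, and the file paths here are hypothetical:

```python
# Equivalent shell invocation (hypothetical paths):
#   python check_file_length.py 10000 litellm/main.py litellm/utils.py
# An exit status of 1 signals that at least one file exceeds the limit.
from check_file_length import check_file_length

offenders = check_file_length(10000, ["litellm/main.py", "litellm/utils.py"])
for filename, length in offenders:
    print(f"{filename}: {length} lines")
```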
2 changes: 1 addition & 1 deletion docs/my-website/docs/completion/input.md
@@ -162,7 +162,7 @@ def completion(
 
 - `function`: *object* - Required.
 
-- `tool_choice`: *string or object (optional)* - Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type: "function", "function": {"name": "my_function"}} forces the model to call that function.
+- `tool_choice`: *string or object (optional)* - Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type: "function", "function": {"name": "my_function"}}` forces the model to call that function.
 
 - `none` is the default when no functions are present. `auto` is the default if functions are present.
 
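A minimal sketch of the forced-call form documented above, assuming an OpenAI-compatible model; the `get_current_weather` tool and its schema are hypothetical:

```python
from litellm import completion

# Hypothetical tool definition, used only for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# tool_choice forces the model to call get_current_weather
# instead of answering in plain text.
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
)
print(response.choices[0].message.tool_calls)
```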
90 changes: 0 additions & 90 deletions docs/my-website/docs/debugging/hosted_debugging.md
@@ -1,90 +0,0 @@
-import Image from '@theme/IdealImage';
-import QueryParamReader from '../../src/components/queryParamReader.js'
-
-# [Beta] Monitor Logs in Production
-
-:::note
-
-This is in beta. Expect frequent updates, as we improve based on your feedback.
-
-:::
-
-LiteLLM provides an integration to let you monitor logs in production.
-
-👉 Jump to our sample LiteLLM Dashboard: https://admin.litellm.ai/
-
-
-<Image img={require('../../img/alt_dashboard.png')} alt="Dashboard" />
-
-## Debug your first logs
-<a target="_blank" href="https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/liteLLM_OpenAI.ipynb">
-  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
-</a>
-
-
-### 1. Get your LiteLLM Token
-
-Go to [admin.litellm.ai](https://admin.litellm.ai/) and copy the code snippet with your unique token
-
-<Image img={require('../../img/hosted_debugger_usage_page.png')} alt="Usage" />
-
-### 2. Set up your environment
-
-**Add it to your .env**
-
-```python
-import os
-
-os.env["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
-
-```
-
-**Turn on LiteLLM Client**
-```python
-import litellm
-litellm.client = True
-```
-
-### 3. Make a normal `completion()` call
-```python
-import litellm
-from litellm import completion
-import os
-
-# set env variables
-os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
-os.environ["OPENAI_API_KEY"] = "openai key"
-
-litellm.use_client = True # enable logging dashboard
-messages = [{ "content": "Hello, how are you?","role": "user"}]
-
-# openai call
-response = completion(model="gpt-3.5-turbo", messages=messages)
-```
-
-Your `completion()` call print with a link to your session dashboard (https://admin.litellm.ai/<your_unique_token>)
-
-In the above case it would be: [`admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb`](https://admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb)
-
-Click on your personal dashboard link. Here's how you can find it 👇
-
-<Image img={require('../../img/dash_output.png')} alt="Dashboard" />
-
-[👋 Tell us if you need better privacy controls](https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version?month=2023-08)
-
-### 3. Review request log
-
-Oh! Looks like our request was made successfully. Let's click on it and see exactly what got sent to the LLM provider.
-
-
-
-
-Ah! So we can see that this request was made to a **Baseten** (see litellm_params > custom_llm_provider) for a model with ID - **7qQNLDB** (see model). The message sent was - `"Hey, how's it going?"` and the response received was - `"As an AI language model, I don't have feelings or emotions, but I can assist you with your queries. How can I assist you today?"`
-
-<Image img={require('../../img/dashboard_log.png')} alt="Dashboard Log Row" />
-
-:::info
-
-🎉 Congratulations! You've successfully debugger your first log!
-
-:::
8 changes: 5 additions & 3 deletions docs/my-website/docs/observability/langfuse_integration.md
@@ -122,6 +122,7 @@ response = completion(
     metadata={
         "generation_name": "ishaan-test-generation", # set langfuse Generation Name
         "generation_id": "gen-id22", # set langfuse Generation ID
+        "parent_observation_id": "obs-id9" # set langfuse Parent Observation ID
         "version": "test-generation-version" # set langfuse Generation Version
         "trace_user_id": "user-id2", # set langfuse Trace User ID
         "session_id": "session-1", # set langfuse Session ID
@@ -190,9 +191,10 @@ The following parameters can be updated on a continuation of a trace by passing
 
 #### Generation Specific Parameters
 
-* `generation_id` - Identifier for the generation, auto-generated by default
-* `generation_name` - Identifier for the generation, auto-generated by default
-* `prompt` - Langfuse prompt object used for the generation, defaults to None
+* `generation_id` - Identifier for the generation, auto-generated by default
+* `generation_name` - Identifier for the generation, auto-generated by default
+* `parent_observation_id` - Identifier for the parent observation, defaults to `None`
+* `prompt` - Langfuse prompt object used for the generation, defaults to `None`
 
 Any other key value pairs passed into the metadata not listed in the above spec for a `litellm` completion will be added as a metadata key value pair for the generation.
 
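For reference, a minimal sketch of passing the generation-specific parameters above through `metadata`, including the newly documented `parent_observation_id`; the IDs are placeholders, and Langfuse credentials are assumed to be configured in the environment:

```python
import litellm

litellm.success_callback = ["langfuse"]  # assumes LANGFUSE_* keys are set in the env

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋"}],
    metadata={
        "generation_name": "test-generation",  # placeholder generation name
        "generation_id": "gen-id22",           # placeholder generation ID
        "parent_observation_id": "obs-id9",    # nests this generation under an existing observation
    },
)
```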
2 changes: 2 additions & 0 deletions docs/my-website/docs/observability/promptlayer_integration.md
@@ -1,3 +1,5 @@
+import Image from '@theme/IdealImage';
+
 # Promptlayer Tutorial
 
 Promptlayer is a platform for prompt engineers. Log OpenAI requests. Search usage history. Track performance. Visually manage prompt templates.
3 changes: 3 additions & 0 deletions docs/my-website/docs/providers/text_completion_openai.md
@@ -1,3 +1,6 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 # OpenAI (Text Completion)
 
 LiteLLM supports OpenAI text completion models
2 changes: 1 addition & 1 deletion docs/my-website/docs/providers/togetherai.md
@@ -208,7 +208,7 @@ print(response)
 
 Instead of using the `custom_llm_provider` arg to specify which provider you're using (e.g. together ai), you can just pass the provider name as part of the model name, and LiteLLM will parse it out.
 
-Expected format: <custom_llm_provider>/<model_name>
+Expected format: `<custom_llm_provider>/<model_name>`
 
 e.g. completion(model="together_ai/togethercomputer/Llama-2-7B-32K-Instruct", ...)
 
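A runnable sketch of the provider-prefix call above, assuming `TOGETHERAI_API_KEY` holds your Together AI key (the key value shown is a placeholder):

```python
import os
from litellm import completion

os.environ["TOGETHERAI_API_KEY"] = "your-together-ai-key"  # placeholder

# The "together_ai/" prefix routes the request to Together AI; the rest of
# the string is passed through as the provider's model name.
response = completion(
    model="together_ai/togethercomputer/Llama-2-7B-32K-Instruct",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```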
(Diff truncated: the remaining changed files are not shown.)