Feat/litellm provider #2859 (base: staging)
Conversation
Greptile Summary

Added LiteLLM as a new provider so users can connect their LiteLLM proxy server and access 100+ LLM providers through a unified OpenAI-compatible API.

Implementation Quality: Confidence Score: 5/5
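For context on what "unified OpenAI-compatible API" means here: because the LiteLLM proxy speaks the OpenAI wire format, any stock OpenAI client can target it just by swapping the base URL. A minimal illustration — the URL, key, and model name are examples, not values from this PR:

```ts
import OpenAI from "openai";

// Point a stock OpenAI client at the LiteLLM proxy instead of api.openai.com.
// The proxy routes the request to whichever backend serves the named model.
const client = new OpenAI({
  baseURL: "http://localhost:4000/v1", // example proxy address
  apiKey: process.env.LITELLM_API_KEY ?? "sk-anything", // optional, depending on proxy auth
});

const completion = await client.chat.completions.create({
  model: "claude-3-5-sonnet", // any model the proxy is configured to serve
  messages: [{ role: "user", content: "Hello from Sim!" }],
});

console.log(completion.choices[0].message.content);
```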
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant AgentBlock as Agent Block
    participant LiteLLMProvider as LiteLLM Provider
    participant LiteLLMProxy as LiteLLM Proxy Server
    participant LLMBackend as Underlying LLM
    User->>AgentBlock: Execute with litellm/model-name
    AgentBlock->>LiteLLMProvider: executeRequest(request)
    alt Initialization (first time)
        LiteLLMProvider->>LiteLLMProxy: GET /v1/models
        LiteLLMProxy-->>LiteLLMProvider: Available models list
        LiteLLMProvider->>LiteLLMProvider: Store models with litellm/ prefix
    end
    LiteLLMProvider->>LiteLLMProvider: Strip litellm/ prefix from model
    LiteLLMProvider->>LiteLLMProvider: Build OpenAI-compatible payload
    alt Streaming Request
        LiteLLMProvider->>LiteLLMProxy: POST /v1/chat/completions (stream=true)
        LiteLLMProxy->>LLMBackend: Forward to actual provider
        LLMBackend-->>LiteLLMProxy: Stream chunks
        LiteLLMProxy-->>LiteLLMProvider: Stream chunks
        LiteLLMProvider->>LiteLLMProvider: Create ReadableStream
        LiteLLMProvider-->>AgentBlock: StreamingExecution
    else Non-streaming with Tools
        loop Tool Call Iterations
            LiteLLMProvider->>LiteLLMProxy: POST /v1/chat/completions
            LiteLLMProxy->>LLMBackend: Forward request
            LLMBackend-->>LiteLLMProxy: Response with tool_calls
            LiteLLMProxy-->>LiteLLMProvider: Response
            LiteLLMProvider->>LiteLLMProvider: Execute tools locally
            LiteLLMProvider->>LiteLLMProvider: Add tool results to messages
        end
        LiteLLMProvider-->>AgentBlock: ProviderResponse
    end
    AgentBlock-->>User: Execution result
```
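To make the flow concrete, here is a minimal TypeScript sketch of the steps the diagram shows: model discovery with the litellm/ prefix, prefix stripping, and the non-streaming tool-call loop. Every name here (fetchLiteLLMModels, executeRequest, runToolLocally, the payload shapes) is an illustrative assumption, not the PR's actual code:

```ts
const PREFIX = "litellm/";
const BASE_URL = process.env.LITELLM_BASE_URL ?? "http://localhost:4000";

function authHeaders(): Record<string, string> {
  // The API key is optional; only send the header when one is configured.
  const key = process.env.LITELLM_API_KEY;
  return key ? { Authorization: `Bearer ${key}` } : {};
}

// Initialization: ask the proxy which models it serves and store them
// under the litellm/ prefix so the selector can attribute them to this provider.
async function fetchLiteLLMModels(): Promise<string[]> {
  const res = await fetch(`${BASE_URL}/v1/models`, { headers: authHeaders() });
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => `${PREFIX}${m.id}`);
}

// Non-streaming execution with a local tool-call loop, as in the diagram.
async function executeRequest(model: string, messages: any[], tools?: any[]) {
  const payload = {
    model: model.replace(PREFIX, ""), // strip litellm/ before hitting the proxy
    messages,
    tools,
  };

  // Keep calling the proxy until the model stops requesting tools.
  for (;;) {
    const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json", ...authHeaders() },
      body: JSON.stringify(payload),
    });
    const completion = await res.json();
    const message = completion.choices[0].message;

    if (!message.tool_calls?.length) return completion;

    // Execute each requested tool locally and feed the results back.
    payload.messages.push(message);
    for (const call of message.tool_calls) {
      const result = await runToolLocally(call); // hypothetical helper
      payload.messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(result),
      });
    }
  }
}

declare function runToolLocally(call: unknown): Promise<unknown>;
```

Executing tools locally and looping until the model stops requesting them mirrors the standard OpenAI tool-calling pattern, which is why the proxy itself needs no tool support.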
Greptile's behavior is changing! From now on, if a review finishes with no comments, we will not post an additional "statistics" comment to confirm that our review found nothing to comment on. However, you can confirm that we reviewed your changes in the status check section. This feature can be toggled off in your Code Review Settings by deselecting "Create a status check for each PR".
Force-pushed from 2ef1fd1 to 9c71f8a
Someone is attempting to deploy a commit to the Sim Team on Vercel. A member of the Team first needs to authorize it.
@waleedlatif1 please help merge, thanks!
Summary
Add LiteLLM as a new provider to enable users to connect their LiteLLM proxy server for accessing 100+ LLM providers through a unified OpenAI-compatible API. This is useful for users who want to use services like GitHub Copilot (request-based pricing) or other providers through LiteLLM.
Fixes #(issue number if applicable)
Type of Change
Testing
- Set LITELLM_BASE_URL=http://localhost:4000 in .env (a sketch of the entries follows below)
- Verified that models appear with the litellm/ prefix in the Agent block model selector
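For reference, the .env entries this test setup assumes might look like the following; the values are local examples only:

```dotenv
# Base URL of the locally running LiteLLM proxy
LITELLM_BASE_URL=http://localhost:4000
# Optional: only needed if the proxy enforces authentication
LITELLM_API_KEY=sk-example-key
```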
Checklist

Changes
Agent blocks (Provider integration):
- apps/sim/providers/litellm/ — new provider implementation
- /api/providers/litellm/models — endpoint exposing the proxy's model list (a rough sketch follows below)
- Environment variables: LITELLM_BASE_URL, LITELLM_API_KEY (optional)

Copilot integration:
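To illustrate the model-discovery endpoint from the Agent-blocks list above, here is a hypothetical Next.js route handler; the file path, response shape, and error handling are assumptions for illustration, not the PR's actual code:

```ts
// Hypothetical route file: apps/sim/app/api/providers/litellm/models/route.ts
import { NextResponse } from "next/server";

export async function GET() {
  const baseUrl = process.env.LITELLM_BASE_URL;
  if (!baseUrl) {
    // No proxy configured: expose an empty model list instead of failing.
    return NextResponse.json({ models: [] });
  }

  const apiKey = process.env.LITELLM_API_KEY;
  const res = await fetch(`${baseUrl}/v1/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!res.ok) {
    return NextResponse.json(
      { error: `LiteLLM proxy returned ${res.status}` },
      { status: 502 }
    );
  }

  const body = (await res.json()) as { data: { id: string }[] };
  // Prefix each model id so the Agent block can route it back to this provider.
  return NextResponse.json({
    models: body.data.map((m) => `litellm/${m.id}`),
  });
}
```

Prefixing the ids with litellm/ is what lets the Agent block's model selector distinguish proxy-served models from the built-in providers, matching the "Store models with litellm/ prefix" step in the sequence diagram.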