Gateway streamlines requests to 200+ open & closed source models with a unified API. It is also production-ready with support for caching, fallbacks, retries, timeouts, and load balancing, and can be edge-deployed for minimal latency.
✅ Blazing fast (9.9x faster) with a tiny footprint (~45kb installed)
✅ Load balance across multiple models, providers, and keys
✅ Fallbacks make sure your app stays resilient
✅ Automatic Retries with exponential backoff come by default
✅ Configurable Request Timeouts to easily handle unresponsive LLM requests
✅ Multimodal support to route across Vision, TTS, STT, Image Gen, and more models
✅ Plug-in middleware as needed
✅ Battle-tested over 300B tokens
✅ Enterprise-ready for enhanced security, scale, and custom deployments
- Run it Locally for complete control & customization
- Hosted by Portkey for quick setup without infrastructure concerns
- Enterprise On-Prem for advanced features and dedicated support
Run the following command in your terminal and it will spin up the Gateway on your local system:
```sh
npx @portkey-ai/gateway
```
Your AI Gateway is now running on http://localhost:8787 🚀
Gateway is also edge-deployment ready. Explore deployment guides for Cloudflare, Docker, AWS, and more here.
This same open-source Gateway powers the Portkey API, which processes billions of tokens daily and is in production with companies like Postman, Haptik, Turing, MultiOn, SiteGPT, and more.
Sign up for the free developer plan (10K requests/month) here, or discuss here for enterprise deployments.
Gateway is fully compatible with the OpenAI API & SDK, and extends them to call 200+ LLMs reliably. To use the Gateway through OpenAI, you only need to update the base URL and pass the provider name in headers.
- To use the hosted version through Portkey, set the base URL to `https://api.portkey.ai/v1`
- To run locally, set it to `http://localhost:8787/v1`
Let's see how to use the Gateway to make an Anthropic request in the OpenAI spec below; the same pattern works for all other providers.
```sh
pip install portkey-ai
```
While instantiating your OpenAI client:
- Set the `base_url` to `http://localhost:8787/v1` (or to `PORTKEY_GATEWAY_URL` from the Portkey SDK if you're using the hosted version)
- Pass the provider name in the `default_headers` param (here we use the `createHeaders` method from the Portkey SDK to auto-create the full header)
```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

gateway = OpenAI(
    api_key="ANTHROPIC_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,  # or http://localhost:8787/v1 when running locally
    default_headers=createHeaders(
        provider="anthropic",
        api_key="PORTKEY_API_KEY"  # grab from https://app.portkey.ai; not needed when running locally
    )
)

chat_complete = gateway.chat.completions.create(
    model="claude-3-sonnet-20240229",
    messages=[{"role": "user", "content": "What's a fractal?"}],
    max_tokens=512
)

print(chat_complete.choices[0].message.content)
```
If you want to run the Gateway locally, don't forget to run `npx @portkey-ai/gateway` in your terminal before this! Otherwise, just sign up on Portkey and keep your Portkey API Key handy.
Works the same as in Python. Add `baseURL` & `defaultHeaders` while instantiating your OpenAI client and pass the relevant provider details.
```sh
npm install portkey-ai
```
```js
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

const gateway = new OpenAI({
  apiKey: 'ANTHROPIC_API_KEY',
  baseURL: PORTKEY_GATEWAY_URL, // or http://localhost:8787/v1 when running locally
  defaultHeaders: createHeaders({
    provider: 'anthropic',
    apiKey: 'PORTKEY_API_KEY', // grab from https://app.portkey.ai; not needed when running locally
  }),
});

async function main() {
  const chatCompletion = await gateway.chat.completions.create({
    messages: [{ role: 'user', content: 'Who are you?' }],
    model: 'claude-3-sonnet-20240229',
    max_tokens: 512,
  });
  console.log(chatCompletion.choices[0].message.content);
}

main();
```
In your OpenAI REST request:
- Change the request URL to `https://api.portkey.ai/v1` (or `http://localhost:8787/v1` if you're hosting locally)
- Pass an additional `x-portkey-provider` header with the provider's name
- Change the model's name to an Anthropic one, e.g. `claude-3-haiku-20240307`
```sh
curl 'http://localhost:8787/v1/chat/completions' \
  -H 'x-portkey-provider: anthropic' \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{ "model": "claude-3-haiku-20240307", "messages": [{"role": "user", "content": "Hi"}] }'
```
For other providers, change the `provider` & `model` to their respective values.
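For instance, here's a minimal sketch of the same request routed to OpenAI through the Gateway (the model name is illustrative; any provider slug from the integrations table below works the same way):

```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

gateway = OpenAI(
    api_key="OPENAI_API_KEY",  # the target provider's key
    base_url=PORTKEY_GATEWAY_URL,  # or http://localhost:8787/v1 when running locally
    default_headers=createHeaders(provider="openai"),
)

chat_complete = gateway.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Hi"}],
)
print(chat_complete.choices[0].message.content)
```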
- Run Gateway on prompts from Langchain hub
- Use Portkey Gateway with Vercel's AI SDK
- Set up fallback from SDXL to Dall-E-3
- Comparing Top 10 LMSYS Models with Portkey
- Fallback from OpenAI to Azure OpenAI
- Set up automatic retries for failed requests
- Call Llama 3 on Groq
Explore Gateway integrations with 20+ providers and 6+ frameworks.
| Provider | Support | Stream |
|---|---|---|
| OpenAI | ✅ | ✅ |
| Azure OpenAI | ✅ | ✅ |
| Anyscale | ✅ | ✅ |
| Google Gemini & PaLM | ✅ | ✅ |
| Anthropic | ✅ | ✅ |
| Cohere | ✅ | ✅ |
| Together AI | ✅ | ✅ |
| Perplexity | ✅ | ✅ |
| Mistral | ✅ | ✅ |
| Nomic | ✅ | ✅ |
| AI21 | ✅ | ✅ |
| Stability AI | ✅ | ✅ |
| DeepInfra | ✅ | ✅ |
| Ollama | ✅ | ✅ |
| Novita AI | ✅ | ✅ |
Reliability features are set by passing a relevant Gateway Config (JSON) with the `x-portkey-config` header, or with the `config` param in the SDKs.
```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "openai", "api_key": "OPENAI_API_KEY" },
    { "provider": "anthropic", "api_key": "ANTHROPIC_API_KEY" }
  ]
}
```
Portkey Gateway will automatically trigger Anthropic if the OpenAI request fails. With `$CONFIG` holding the JSON above (the provider keys live in the config's targets, so no separate Authorization header is needed):

```sh
curl 'http://localhost:8787/v1/chat/completions' \
  -H "x-portkey-config: $CONFIG" \
  -H 'Content-Type: application/json' \
  -d '{ "model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hi"}] }'
```
You can also trigger fallbacks only on specific status codes by passing an array of status codes with the `on_status_codes` param in `strategy`.
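For example, here's a sketch of a config that falls back to Anthropic only on rate-limit or server errors, passed inline through the Python SDK (the status codes are illustrative, and this assumes `createHeaders` accepts a dict for `config`, mirroring the object the JS SDK takes):

```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Fall back to Anthropic only when OpenAI returns one of these codes.
config = {
    "strategy": {
        "mode": "fallback",
        "on_status_codes": [429, 500, 502, 503],  # illustrative
    },
    "targets": [
        {"provider": "openai", "api_key": "OPENAI_API_KEY"},
        {"provider": "anthropic", "api_key": "ANTHROPIC_API_KEY"},
    ],
}

gateway = OpenAI(
    api_key="placeholder",  # real keys live in the config's targets
    base_url=PORTKEY_GATEWAY_URL,  # or http://localhost:8787/v1 when running locally
    default_headers=createHeaders(config=config),
)
```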
Read the full Fallback documentation here.
This config distributes requests equally across three OpenAI accounts:

```json
{
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "provider": "openai", "api_key": "ACCOUNT_1_KEY", "weight": 1 },
    { "provider": "openai", "api_key": "ACCOUNT_2_KEY", "weight": 1 },
    { "provider": "openai", "api_key": "ACCOUNT_3_KEY", "weight": 1 }
  ]
}
```
Attach the config (inline, or as a saved Config ID) while instantiating your client:

```js
import OpenAI from 'openai';
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';

const gateway = new OpenAI({
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: 'PORTKEY_API_KEY',
    config: 'CONFIG_ID',
  }),
});
```
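The Python equivalent is a near-mirror (a sketch, assuming the Python SDK's `createHeaders` takes the same `config` param as the JS one):

```python
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

gateway = OpenAI(
    api_key="placeholder",  # keys are supplied by the config's targets
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="PORTKEY_API_KEY",
        config="CONFIG_ID",  # a saved Config ID, or an inline dict
    ),
)
```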
Read the Loadbalancing docs here.
Similarly, you can write a Config that will attempt retries up to 5 times:
```json
{
  "retry": { "attempts": 5 }
}
```
Here, a request timeout of 10 seconds (10,000 ms) will be applied to *all* the targets.
```json
{
  "strategy": { "mode": "fallback" },
  "request_timeout": 10000,
  "targets": [
    { "virtual_key": "open-ai-xxx" },
    { "virtual_key": "azure-open-ai-xxx" }
  ]
}
```
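These reliability features compose into a single Config. For example, a sketch combining retries, a request timeout, and a fallback (structure follows the examples above; the attempt count, timeout, and keys are illustrative, and exact nesting should be checked against the config docs):

```python
# A sketch of one Gateway Config combining retries, a timeout, and a fallback.
config = {
    "retry": {"attempts": 3},
    "request_timeout": 10000,  # milliseconds, applied to all targets
    "strategy": {"mode": "fallback"},
    "targets": [
        {"provider": "openai", "api_key": "OPENAI_API_KEY"},
        {"provider": "anthropic", "api_key": "ANTHROPIC_API_KEY"},
    ],
}
# Pass it per-client or per-request, e.g. createHeaders(config=config).
```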
Here's a guide to use the config object in your request.
| Language | Supported SDKs |
|---|---|
| Node.js / JS / TS | Portkey SDK, OpenAI SDK, LangchainJS, LlamaIndex.TS |
| Python | Portkey SDK, OpenAI SDK, Langchain, LlamaIndex |
| Go | go-openai |
| Java | openai-java |
| Rust | async-openai |
| Ruby | ruby-openai |
See docs on installing the AI Gateway locally or deploying it on popular locations.
- Deploy to App Stack
- Deploy to Cloudflare Workers
- Deploy using Docker
- Deploy using Docker Compose
- Deploy to Zeabur
- Run a Node.js server
Make your AI app more reliable and forward-compatible, while ensuring complete data security and privacy.
✅ Secure Key Management - for role-based access control and tracking
✅ Simple & Semantic Caching - to serve repeat queries faster & save costs
✅ Access Control & Inbound Rules - to control which IPs and Geos can connect to your deployments
✅ PII Redaction - to automatically remove sensitive data from your requests and prevent inadvertent exposure
✅ SOC2, ISO, HIPAA, GDPR Compliance - for best security practices
✅ Professional Support - along with feature prioritization
Schedule a call to discuss enterprise deployments
The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines here.
Bug Report? File here | Feature Request? File here
Join our growing community around the world, for help, ideas, and discussions on AI.