A FastAPI server that translates OpenAI API requests to YandexGPT and YandexART API requests. This enables you to use tools and applications designed for OpenAI's API with Yandex's language and image generation models.
- Text Generation: Translates OpenAI chat completion requests to YandexGPT
- ✅ Streaming support
- ✅ Tools/Function calling with YandexGPT 4 models (for now only `yandexgpt/rc`)
- ⬜ Vision (not supported)
- Text Embeddings: Converts embedding requests to Yandex's text vectorization models
- ✅ Supports both `float` and `base64` encoding formats
- Image Generation: Translates DALL-E style requests to YandexART
- ✅ Supports both base64 and URL response formats
- ✅ Configurable aspect ratios
- ❌ Multiple images per request (limited to 1)
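Since tools/function calling currently works only with `yandexgpt/rc`, a request using it might look like the following sketch. The `get_weather` tool is a hypothetical example, not part of this project; the body shown is what would be POSTed to `/v1/chat/completions`.

```python
import json

# Hypothetical tool definition in the OpenAI "tools" schema; the proxy
# forwards it to YandexGPT's function-calling API.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

# Request body as it would be sent to /v1/chat/completions.
payload = {
    "model": "yandexgpt/rc",  # tools currently require yandexgpt/rc
    "messages": [{"role": "user", "content": "Weather in Moscow?"}],
    "tools": tools,
}
print(json.dumps(payload, indent=2))
```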
- A Yandex Cloud account
- API key and catalog ID from Yandex Cloud
- Required IAM roles:
  - `ai.languageModels.user` (for YandexGPT)
  - `ai.imageGeneration.user` (for YandexART)
```shell
git clone https://github.com/sazonovanton/YandexGPT_to_OpenAI
cd YandexGPT_to_OpenAI
```
The server supports two authentication methods:
Generate tokens that users can use to access the API:

```shell
python utils/tokens.py
```

Tokens will be stored in `data/tokens.json`.
Allow users to provide their own Yandex Cloud credentials in the format `<CatalogID>:<SecretKey>`.
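With the OpenAI Python client, a BYOK credential is passed in place of the API key. A minimal sketch (the catalog ID and secret key below are placeholder values):

```python
# BYOK: the bearer token is the user's own Yandex Cloud credentials,
# joined as <CatalogID>:<SecretKey>. Values below are placeholders.
catalog_id = "b1g-your-catalog-id"
secret_key = "your-secret-key"
api_key = f"{catalog_id}:{secret_key}"

# The resulting key is then used like any OpenAI API key, e.g.:
# client = OpenAI(api_key=api_key, base_url="http://127.0.0.1:8520/v1")
print(api_key)
```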
- Configure environment variables in `docker-compose.yml`:

  ```yaml
  environment:
    - Y2O_SecretKey=your_secret_key
    - Y2O_CatalogID=your_catalog_id
    - Y2O_BringYourOwnKey=false
    - Y2O_ServerURL=http://127.0.0.1:8520
    - Y2O_LogFile=logs/y2o.log
    - Y2O_LogLevel=INFO
  ```
- Start the server:

  ```shell
  docker-compose up -d
  ```
- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Create a `.env` file with configuration:

  ```
  Y2O_SecretKey=your_secret_key
  Y2O_CatalogID=your_catalog_id
  Y2O_BringYourOwnKey=false
  Y2O_Host=127.0.0.1
  Y2O_Port=8520
  Y2O_ServerURL=http://127.0.0.1:8520
  Y2O_LogFile=logs/y2o.log
  Y2O_LogLevel=INFO
  ```

- Start the server:

  ```shell
  python app.py
  ```
To enable SSL, set the following environment variables:

```
Y2O_SSL_Key=ssl/private.key
Y2O_SSL_Cert=ssl/cert.pem
```
You can test the API with your own keys (see BYOK) by setting the base URL to `https://sazonovanton.online:8520/v1`. The logging level is set to `INFO`; your keys will not be stored.
```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("TOKEN"),
    base_url="http://<your_host>:<your_port>/v1",
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="yandexgpt/latest",
)
```
```shell
curl http://<your_host>:<your_port>/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "model": "yandexgpt/latest",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```
```python
response = client.images.generate(
    model="yandex-art/latest",
    prompt="A painting of a cat",
    response_format="b64_json",
)
```
```shell
curl http://<your_host>:<your_port>/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "model": "yandex-art/latest",
    "prompt": "A painting of a cat",
    "response_format": "url"
  }'
```
```shell
curl -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -O http://<your_host>:<your_port>/images/<id>.jpg
```
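The feature list mentions configurable aspect ratios. A sketch of a request for a non-square image, assuming the proxy derives the YandexART aspect ratio from the OpenAI-style `size` field (an assumption — check the server source for the exact mechanism):

```python
# Hypothetical request body with a non-square image size.
# Assumption: the proxy maps the OpenAI "size" field (WxH) to a
# YandexART aspect ratio; this is illustrative, not confirmed.
payload = {
    "model": "yandex-art/latest",
    "prompt": "A painting of a cat",
    "size": "1024x576",        # 16:9 aspect ratio
    "response_format": "url",
    "n": 1,                    # the server is limited to 1 image per request
}

width, height = (int(v) for v in payload["size"].split("x"))
print(f"aspect ratio: {width}:{height}")
```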
```python
response = client.embeddings.create(
    model="text-search-query/latest",
    input=["Your text here"],
    encoding_format="float",
)
```
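With `encoding_format="base64"` instead of `"float"`, each embedding arrives as a base64 string. Assuming the proxy follows OpenAI's convention (raw little-endian float32 bytes), it can be decoded as below; the sample vector is synthetic, standing in for `response.data[0].embedding`:

```python
import base64
import struct

# Synthetic 3-dimensional embedding, encoded the way OpenAI's base64
# format works: raw little-endian float32 bytes, then base64.
vector = [0.1, -0.2, 0.3]
encoded = base64.b64encode(struct.pack(f"<{len(vector)}f", *vector)).decode()

# What a client would do with the base64 string from the response:
raw = base64.b64decode(encoded)
decoded = list(struct.unpack(f"<{len(raw) // 4}f", raw))
print(decoded)  # ~[0.1, -0.2, 0.3] up to float32 rounding
```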
| Variable | Description | Default |
|---|---|---|
| `Y2O_SecretKey` | Yandex Cloud API key | None |
| `Y2O_CatalogID` | Yandex Cloud catalog ID | None |
| `Y2O_BringYourOwnKey` | Allow users to provide their own credentials | `False` |
| `Y2O_Host` | Server host | `127.0.0.1` |
| `Y2O_Port` | Server port | `8520` |
| `Y2O_ServerURL` | Public server URL for image download | `http://127.0.0.1:8520` |
| `Y2O_LogFile` | Log file path | `logs/y2o.log` |
| `Y2O_LogLevel` | Logging level | `INFO` |
| `Y2O_SSL_Key` | SSL private key path | None |
| `Y2O_SSL_Cert` | SSL certificate path | None |
| `Y2O_CORS_Origins` | Allowed CORS origins | `*` |
| `Y2O_TestToken` | Test token for `utils/test.py` (dev) | None |
The translator supports automatic model name mapping from OpenAI model names to Yandex Foundation Models. However, these models may not have direct equivalents, so it is recommended to use Yandex model names directly (e.g., `yandexgpt/latest`).
The following aliases are supported:
- `gpt-3.5*` → `yandexgpt-lite/latest`
- `*mini*` → `yandexgpt-lite/latest`
- `gpt-4*` → `yandexgpt/latest`
- `text-embedding-3-large` → `text-search-doc/latest`
- `text-embedding-3-small` → `text-search-query/latest`
- `text-embedding-ada-002` → `text-search-query/latest`
- `dall-e*` → `yandex-art/latest`
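The aliases above behave like glob patterns. A minimal resolver sketch — not the server's actual implementation, and the match order for overlapping patterns such as `*mini*` vs. `gpt-4*` may differ:

```python
from fnmatch import fnmatch

# Alias table from the list above; first match wins in this sketch.
ALIASES = [
    ("*mini*", "yandexgpt-lite/latest"),
    ("gpt-3.5*", "yandexgpt-lite/latest"),
    ("gpt-4*", "yandexgpt/latest"),
    ("text-embedding-3-large", "text-search-doc/latest"),
    ("text-embedding-3-small", "text-search-query/latest"),
    ("text-embedding-ada-002", "text-search-query/latest"),
    ("dall-e*", "yandex-art/latest"),
]

def resolve_model(name: str) -> str:
    for pattern, target in ALIASES:
        if fnmatch(name, pattern):
            return target
    return name  # Yandex model names pass through unchanged

print(resolve_model("gpt-4o"))    # -> yandexgpt/latest
print(resolve_model("dall-e-3"))  # -> yandex-art/latest
```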