End-to-end open-source voice agents platform: Quickly build LLM-based, voice-driven conversational applications
Bolna is the end-to-end, open-source, production-ready framework for quickly building LLM-based, voice-driven conversational applications.
Demo video: demo-create-agent-and-make-calls.mp4 (creating an agent and making calls)
Bolna helps you create AI Voice Agents which can be instructed to do tasks beginning with:

- Initiating a phone call using telephony providers like Twilio, Exotel, etc.
- Transcribing the conversations using Deepgram, etc.
- Using LLMs like OpenAI, Llama, Cohere, Mistral, etc. to handle conversations
- Synthesizing LLM responses back to telephony using AWS Polly, XTTS, ElevenLabs, Deepgram, etc.
- Instructing the Agent to perform tasks like sending emails, sending text messages, or booking calendar events after the conversation has ended
Refer to the docs for a deep dive into all supported providers.
A basic local setup uses Twilio for telephony. We have dockerized the setup in `local_setup/`. One will need to populate an environment `.env` file from `.env.sample`.
The setup consists of four containers:
- Twilio web server: for initiating the calls; one will need to set up a Twilio account
- Bolna server: for creating and handling agents
- ngrok: for tunneling; one will need to add the authtoken to `ngrok-config.yml`
- redis: for persisting agents & prompt data
Running `docker-compose up --build` will use `.env` as the environment file.
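Put together, the local setup steps above boil down to something like the following sketch (adjust paths and keys to your environment):

```sh
cd local_setup
cp .env.sample .env                 # fill in your provider keys (see the provider sections below)
# add your ngrok authtoken to ngrok-config.yml before starting the containers
docker-compose up --build           # uses .env as the environment file
```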
Once the docker containers are up and running, you can start creating agents and instructing them to initiate calls.
- Refer to the official Agent API to create an agent
- Initiate a call using an API similar to the Call API to receive a call
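As a rough illustration, the snippet below sketches what these two requests might look like from Python against the local setup. The port, endpoint paths, and payload fields are assumptions made for illustration only; the authoritative request schemas are in the Agent and Call API docs.

```python
import requests

# Assumed base URL of the local Bolna server; check the docker-compose setup for the actual port.
BOLNA_HOST = "http://localhost:5001"

# Hypothetical agent payload -- the real schema is defined by the Agent API.
agent_payload = {
    "agent_name": "demo_agent",
    "tasks": [{
        "task_type": "conversation",
        "tools_config": {
            "transcriber": {"provider": "deepgram"},
            "llm_agent": {"provider": "openai", "model": "gpt-3.5-turbo"},
            "synthesizer": {"provider": "polly"},
        },
    }],
}

# Create the agent (endpoint path is an assumption; see the Agent API reference).
resp = requests.post(f"{BOLNA_HOST}/agent", json=agent_payload, timeout=30)
resp.raise_for_status()
agent_id = resp.json()["agent_id"]  # field name assumed

# Initiate an outbound call (endpoint and fields are assumptions; see the Call API reference).
call = requests.post(
    f"{BOLNA_HOST}/call",
    json={"agent_id": agent_id, "recipient_phone_number": "+15550100000"},
    timeout=30,
)
call.raise_for_status()
print(call.json())
```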
You can populate the `.env` file to use your own keys for providers.
ASR Providers
These are the currently supported ASR providers:
| Provider | Environment variable to be added in the `.env` file |
|---|---|
| Deepgram | `DEEPGRAM_AUTH_TOKEN` |
LLM Providers
Bolna uses the LiteLLM package to support multiple LLM integrations.
These are the currently supported LLM provider families: https://github.com/bolna-ai/bolna/blob/c8a0d1428793d4df29133119e354bc2f85a7ca76/bolna/providers.py#L19-L28
For LiteLLM-based LLMs, add the following to the `.env` file, as applicable to your use case:

- `LITELLM_MODEL_API_KEY`: API key of the LLM
- `LITELLM_MODEL_API_BASE`: URL of the hosted LLM
- `LITELLM_MODEL_API_VERSION`: API version for LLMs like Azure OpenAI
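For example, a LiteLLM-backed configuration in `.env` might look like the following; the values are placeholders, and the version line is only needed for providers such as Azure OpenAI:

```
LITELLM_MODEL_API_KEY=your-provider-api-key
LITELLM_MODEL_API_BASE=https://your-hosted-llm.example.com/v1
LITELLM_MODEL_API_VERSION=2023-05-15
```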
For LLMs hosted via vLLM, add the following to the `.env` file:

- `VLLM_SERVER_BASE_URL`: URL of the LLM hosted using vLLM
TTS Providers
These are the currently supported TTS providers: https://github.com/bolna-ai/bolna/blob/c8a0d1428793d4df29133119e354bc2f85a7ca76/bolna/providers.py#L7-L14
| Provider | Environment variable to be added in the `.env` file |
|---|---|
| AWS Polly | Accessed from system-wide credentials via `~/.aws` |
| ElevenLabs | `ELEVENLABS_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| Deepgram | `DEEPGRAM_AUTH_TOKEN` |
In case you wish to extend Bolna and add some other telephony provider like Vonage, Telnyx, etc., follow the guidelines below:
- Make sure bi-directional streaming is supported by the Telephony provider
- Add the telephony-specific input handler file in input_handlers/telephony_providers, writing custom functions extending from the telephony.py class (a rough sketch follows this list)
- This file will mainly contain how different types of event packets are being ingested from the telephony provider
- Add a telephony-specific output handler file in output_handlers/telephony_providers, writing custom functions extending from the telephony.py class
- This mainly concerns converting audio from the synthesizer class to a supported audio format and streaming it over the websocket provided by the telephony provider
- Lastly, you'll have to write a dedicated server like the example twilio_api_server.py provided in local_setup to initiate calls over websockets.
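As a very rough sketch of the input-handler step, the outline below shows the general shape such a file could take for a hypothetical Vonage integration. The class name, method names, and queue wiring are illustrative assumptions, not the actual Bolna API; mirror the real base class and hooks defined in input_handlers/telephony.py.

```python
# input_handlers/telephony_providers/vonage.py -- illustrative only; mirror the real
# base class and method signatures defined in input_handlers/telephony.py.
import base64
import json


class VonageInputHandler:  # in the real codebase this would extend the telephony base class
    """Sketch: ingest provider-specific event packets and forward raw audio downstream."""

    def __init__(self, audio_queue):
        self.audio_queue = audio_queue  # queue consumed by the transcriber (assumed wiring)

    async def handle_packet(self, packet: str):
        message = json.loads(packet)
        event = message.get("event")

        if event == "media":
            # Providers wrap audio differently; decode the provider-specific envelope here.
            audio_bytes = base64.b64decode(message["media"]["payload"])
            await self.audio_queue.put(audio_bytes)
        elif event == "stop":
            # Signal end-of-stream so downstream components can flush and close.
            await self.audio_queue.put(None)
```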
Though the repository is completely open source, you can connect with us if interested in managed hosted offerings or more customized solutions.
We love all types of contributions, whether big or small, that help improve this community resource.
- There are a number of open issues present which can be good ones to start with
- If you have suggestions for enhancements, wish to contribute a simple fix such as correcting a typo, or want to address an apparent bug, please feel free to initiate a new issue or submit a pull request
- If you're contemplating a larger change or addition to this repository, be it in terms of its structure or features, kindly begin by opening a new issue and outlining your proposed changes. This will allow us to engage in a discussion before you dedicate a significant amount of time or effort. Your cooperation and understanding are appreciated.