LiveKit complements Groq's fast AI inference with real-time communication features. This integration enables you to build end-to-end AI voice applications with:
- Complete Voice Pipeline: Combine Groq's fast, accurate large language, speech-to-text (STT), and text-to-speech (TTS) models with LiveKit's infrastructure
- Real-time Communication: Enable multi-user voice interactions with LiveKit's WebRTC infrastructure
- Scalable Architecture: Handle thousands of concurrent users with LiveKit's distributed system
- Web Search Enabled: This template uses Groq's `compound-mini` model with built-in web search capabilities. Try asking questions like "What's the weather in San Francisco?" or "What are today's top headlines?" and watch it fetch real-time information from the web!
This repository is a complete starter template for building end-to-end voice AI assistants with natural voice conversations and sub-second response times using models hosted on Groq and LiveKit's real-time media platform.
Run this template locally - Follow the setup instructions below to get your voice assistant running on your machine in minutes.
This application demonstrates how to build a production-ready voice AI assistant using Groq API for ultra-fast speech processing and LiveKit for real-time audio streaming. Built as a complete, end-to-end template that you can fork, customize, and deploy.
LiveKit is a real-time communication infrastructure (think Zoom or Google Meet) that handles the complex networking, audio processing, and media routing between users and AI agents. Here's how the architecture flows:
- User speaks → Audio captured by frontend client
- LiveKit routes audio → Streams to your Python AI agent in real-time
- AI agent processes → Groq converts speech→text→LLM response→speech
- LiveKit streams back → Audio response delivered to user instantly
This means you need both components running simultaneously:
- AI Agent (Python backend) - Processes voice using Groq models
- Frontend Client (React app) - Handles user interface and audio I/O
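The four-step flow above can be sketched as one turn of a plain-Python pipeline. The names below (`voice_pipeline`, `transcribe`, `generate_reply`, `synthesize`) are illustrative stubs, not the LiveKit or Groq SDK APIs; in the real agent, Groq's STT and LLM models and ElevenLabs TTS fill these roles, and LiveKit streams the audio in and out.

```python
from typing import Callable

def voice_pipeline(
    audio_in: bytes,
    transcribe: Callable[[bytes], str],    # STT: Groq Whisper in the real agent
    generate_reply: Callable[[str], str],  # LLM: Groq Llama in the real agent
    synthesize: Callable[[str], bytes],    # TTS: ElevenLabs in the real agent
) -> bytes:
    """One turn of the voice loop: user audio in -> assistant audio out."""
    text = transcribe(audio_in)       # 1. user speech -> text
    reply = generate_reply(text)      # 2. text -> assistant response
    return synthesize(reply)          # 3. response -> audio streamed back

# Stubbed example turn (no network, no models):
audio_out = voice_pipeline(
    b"<user audio>",
    transcribe=lambda _: "What's the weather?",
    generate_reply=lambda q: f"You asked: {q}",
    synthesize=lambda s: s.encode(),
)
print(audio_out)  # b"You asked: What's the weather?"
```

The real agent runs this loop continuously, with voice activity detection deciding when a user turn has ended.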
You'll need a free LiveKit Cloud account to handle the real-time media infrastructure:
- Sign up at LiveKit Cloud
- Create a new project
- Get your API credentials from the project settings
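The API key and secret are used to mint short-lived JWT access tokens for each participant; the frontend's token endpoint does this through the LiveKit server SDK. Purely to illustrate the token shape (an HS256 JWT with the API key as `iss` and a `video` grant), here is a stdlib sketch; `make_livekit_token` is a hypothetical helper and not a substitute for the SDK.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_livekit_token(api_key: str, api_secret: str, identity: str, room: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {
        "iss": api_key,                 # API key identifies the project
        "sub": identity,                # participant identity
        "exp": int(time.time()) + 600,  # short-lived: 10 minutes
        "video": {"room": room, "roomJoin": True},  # grant: may join this room
    }
    signing_input = (
        f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(api_secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = make_livekit_token("your-api-key", "your-api-secret", "user-1", "demo-room")
```

Because tokens are signed with the API secret, the secret must stay server-side; only the resulting token is handed to the browser.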
Key Features:
- Sub-second response times with Groq's inference
- Real-time voice streaming via LiveKit's infrastructure
- Production-ready noise cancellation and turn detection
- Modern React UI with real-time transcription display
- Efficient concurrent request handling powered by Groq
Tech Stack:
- Frontend: Next.js 14, React, TypeScript, Tailwind CSS
- Backend: Python, LiveKit Agents SDK
- AI Infrastructure: Groq API (STT, LLM, TTS)
- Real-time Media: LiveKit Cloud
AI Pipeline:
- Speech-to-Text: Groq Whisper Large V3 Turbo
- Language Model: Groq Llama 3.1 8B Instant
- Text-to-Speech: ElevenLabs TTS
- Voice Activity Detection: Silero VAD
- Turn Detection: Multilingual model
- uv - Modern Python package manager (will auto-install Python 3.11)
- Node.js 18+ and npm
- Groq API key (Get your free API key here)
- LiveKit Cloud account (Sign up for free)
- ElevenLabs API key (Sign up for free)
Note on Python Version: This project requires Python 3.10 or 3.11 due to the `av` (PyAV) package dependency. Using `uv` will automatically download and use Python 3.11 for you!
```bash
gh repo clone janzheng/groq-livekit-template
cd groq-livekit-template
```

The LiveKit CLI provides convenient utilities for testing and managing your setup, including helping you get your project credentials:
Install the CLI:
```bash
# macOS
brew update && brew install livekit-cli

# Linux/Windows - see https://github.com/livekit/livekit-cli for other installation methods
```

Authenticate with LiveKit Cloud:

```bash
lk cloud auth
```

This allows you to use CLI commands without manually providing credentials each time, and gives you access to additional testing and debugging tools.
uv provides modern Python dependency management, similar to npm install. Best part: uv will automatically download and use Python 3.11 for you - no need to manually install Python!
```bash
# Install uv (if not already installed)
# On macOS/Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh

# On Windows:
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

Install dependencies:

```bash
# uv will automatically download Python 3.11 and install all dependencies
uv sync --no-install-project
```

Download model files: To use the turn-detector, silero, or noise-cancellation plugins, you first need to download the model files:

```bash
uv run python agent.py download-files
```

Test your agent (optional): Start your agent in console mode to test it in your terminal:

```bash
uv run python agent.py console
```

Your agent speaks to you in the terminal, and you can speak to it as well. This is a great way to confirm everything works before setting up the frontend. With `uv run`, you don't need to activate a virtual environment - it handles everything automatically!
In a new terminal tab, navigate to the frontend:

```bash
cd voice-assistant-frontend
```

Install Node.js dependencies:

```bash
npm install
```

Create `.env` in the root directory (for the AI agent):
You can also copy the example file and then edit it with your credentials:

```bash
cp .env.example .env
```

Or create `.env` manually with the following content:
```bash
# LiveKit credentials (get from LiveKit Cloud dashboard)
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret

# Groq API key (get from Groq Console)
GROQ_API_KEY=your-groq-api-key

# ElevenLabs API key (get from the ElevenLabs dashboard)
ELEVEN_API_KEY=your-elevenlabs-api-key
```
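The agent reads these variables at startup, typically via python-dotenv's `load_dotenv()`. Purely as an illustration of the `KEY=VALUE` format, here is a minimal stdlib parser; `load_env` is a hypothetical helper (not part of this repo) and skips quoting and `export` edge cases that python-dotenv handles for you.

```python
import os

def load_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip comments, blanks, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# LiveKit credentials
LIVEKIT_URL=wss://your-project.livekit.cloud
GROQ_API_KEY=your-groq-api-key
"""
config = load_env(sample)
os.environ.update(config)  # make the values visible to the process
```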
**You also need to create a `.env.local` in the `voice-assistant-frontend/` directory:**
```bash
# Environment variables needed to connect to the LiveKit server.
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret
```
Terminal 1 - Start the AI Agent:
```bash
# Make sure you're in the root directory
uv run python agent.py dev
```

Terminal 2 - Start the Frontend:

```bash
# In the voice-assistant-frontend directory
cd voice-assistant-frontend
npm run dev
```

Open your browser to http://localhost:3000 and start talking to your AI assistant!
You can also test the agent directly in your terminal without the frontend:
```bash
# Console mode - talk to agent in terminal
uv run python agent.py console
```

This template is designed to be a foundation for you to build upon. Key areas for customization:
- Model Selection: Update the Groq model configuration in `agent.py`
- Agent Personality: Modify the system instructions in the `Assistant` class
- UI/Styling: Customize themes and components in `voice-assistant-frontend/components/`
- Voice Settings: Change the TTS voice and speech parameters in the agent configuration
Common Issues:
- "Connection failed" - Check your LiveKit credentials and URL
- "Agent not responding" - Ensure the Python agent is running with `uv run python agent.py dev`
- "No audio" - Check browser microphone permissions
- Import errors - Run `uv sync --no-install-project` to reinstall dependencies
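For "Connection failed" errors, a quick sanity check on the environment variables often narrows things down. The helper below is hypothetical (not part of this repo) and only validates shapes - a `wss://` URL and non-placeholder keys - without contacting LiveKit:

```python
from urllib.parse import urlparse

def check_livekit_env(url: str, api_key: str, api_secret: str) -> list[str]:
    """Return a list of problems with the LiveKit credentials (empty list = looks OK)."""
    problems = []
    scheme = urlparse(url).scheme
    if scheme not in ("ws", "wss"):
        problems.append(f"LIVEKIT_URL should start with wss:// (got scheme {scheme!r})")
    if not api_key or api_key == "your-api-key":
        problems.append("LIVEKIT_API_KEY is missing or still a placeholder")
    if not api_secret or api_secret == "your-api-secret":
        problems.append("LIVEKIT_API_SECRET is missing or still a placeholder")
    return problems

# Example: an https:// URL and an empty key are both flagged
for problem in check_livekit_env("https://example.com", "", "some-secret"):
    print(problem)
```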
- Explore the Voice AI Guide: LiveKit Voice AI Quickstart
- Frontend Reference: Based on LiveKit Voice Assistant Frontend
- Create your free GroqCloud account: Access official API docs, the playground for experimentation, and more resources via Groq Console
- Build and customize: Fork this repo and start customizing to build out your own application
- Get support: Connect with other developers building on Groq, chat with our team, and submit feature requests on our Groq Developer Forum
- See enterprise capabilities: This template showcases production-ready AI that can handle real-time business workloads
- Discuss your needs: Contact our team to explore how Groq can accelerate your AI initiatives
This project is licensed under the MIT License - see the LICENSE file for details.
Created by Jan Zheng using LiveKit and Groq.
Frontend based on the LiveKit Voice Assistant Frontend example.