A Chrome extension that uses AI to answer questions about YouTube video content, using RAG (Retrieval-Augmented Generation) with Llama served via the Groq API.
- 🤖 AI-powered Q&A about YouTube videos
- 📺 Automatic transcript extraction
- 💬 Chat-like interface
- 🎯 Context-aware responses using RAG
- ⚡ Lightning-fast responses with Groq API
- 🔒 Secure API key management
- Frontend: HTML, CSS, JavaScript (Chrome Extension)
- Backend: FastAPI (Python)
- AI Model: Llama 3.1 8B Instant (via Groq API)
- Libraries:
- youtube-transcript-api
- groq
- tiktoken
- python-dotenv
- uvicorn
- fastapi
- Get a free Groq API key from console.groq.com
- Python 3.8+ installed
- Clone the repository:

  ```bash
  git clone https://github.com/KUNDAN1334/YT_SCAN.git
  cd YT_SCAN
  ```

- Install Python dependencies:

  ```bash
  pip install fastapi uvicorn youtube-transcript-api groq tiktoken python-dotenv
  ```

- Create a `.env` file in the project root:

  ```
  GROQ_API_KEY=your-groq-api-key-here
  ```

- Start the backend server:

  ```bash
  cd backend
  uvicorn main:app --reload
  ```
- Open Chrome and go to `chrome://extensions/`
- Enable "Developer mode"
- Click "Load unpacked"
- Select the project root directory (YT_SCAN)
- Make sure your `.env` file contains your Groq API key
- Start the FastAPI backend server
- Open the Chrome extension
- Paste a YouTube URL
- Click "Load Video"
- Ask questions about the video content
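The same flow can also be driven without the extension by posting to the backend directly. A minimal sketch, assuming the server runs at `http://localhost:8000` and that `/ask` accepts a JSON body (the `video_url`/`question` field names and the `answer` response key are assumptions — check `backend/main.py` for the real schema):

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"  # default uvicorn address

def build_ask_payload(video_url: str, question: str) -> dict:
    # Field names are illustrative; check backend/main.py for the real schema.
    return {"video_url": video_url, "question": question}

def ask(video_url: str, question: str) -> str:
    """POST a question about a video to the backend and return the answer."""
    data = json.dumps(build_ask_payload(video_url, question)).encode("utf-8")
    req = urllib.request.Request(
        f"{API_BASE}/ask",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["answer"]  # assumed response key
```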
```
YT_SCAN/
├── backend/
│   ├── main.py                 # FastAPI server
│   ├── transcript_handler.py   # YouTube transcript extraction
│   └── llama_chat.py           # Groq API integration
├── popup.html                  # Extension popup UI
├── popup.js                    # Frontend JavaScript
├── style.css                   # Styling
├── manifest.json               # Extension manifest
├── .env                        # Environment variables (API keys)
├── .gitignore                  # Git ignore file
└── README.md
```
The AI model is configured in `backend/llama_chat.py` using the Groq API:

```python
from groq import Groq
import os
from dotenv import load_dotenv

load_dotenv()
client = Groq(api_key=os.getenv("GROQ_API_KEY"))

def generate_answer(context, question):
    # Build the RAG prompt from the transcript context and the user's question
    prompt = f"Answer the question using the transcript below.\n\nTranscript:\n{context}\n\nQuestion: {question}"
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # Fast Llama model
        messages=[{"role": "user", "content": prompt}],
        max_tokens=800,
        temperature=0.1,
        stream=False,
    )
    return response.choices[0].message.content.strip()
```

API endpoints:

- `POST /ask` - Submit a question about a YouTube video
- `GET /` - Health check
Groq's Llama 3.1 8B Instant offers:
- ⚡ Ultra-fast response times (1-3 seconds)
- 🎯 High-quality text generation
- 🧠 Excellent context understanding
- 💰 Free tier with generous limits
- 🔄 No local setup required
The extension automatically handles large video transcripts by:
- 📊 Counting tokens using tiktoken
- ✂️ Smart context truncation (keeps recent content)
- 🛡️ Preventing API limit errors
- 🎯 Maintaining answer quality
Create a .env file with:
```
GROQ_API_KEY=your-actual-groq-api-key-here
```

Important: Never commit your `.env` file to version control!
- API Key Error:
  - Check that your `.env` file exists and contains a valid Groq API key
  - Verify your API key at console.groq.com

- Token Limit Error:
  - The extension automatically truncates long transcripts
  - If issues persist, try shorter video segments

- Connection Error:
  - Ensure the FastAPI server is running on `http://localhost:8000`
  - Check your internet connection for Groq API access

- Transcript Unavailable:
  - Some videos may not have transcripts available
  - Try videos with auto-generated captions
Groq Free Tier:
- 6,000 tokens per minute
- Automatic context truncation handles this
- Upgrade to Dev Tier for higher limits
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
- ✅ API keys stored in `.env` file
- ✅ `.env` file excluded from git
- ✅ No sensitive data in code
- ✅ Secure API communication
MIT License
- ⚡ Switched from local Mistral to Groq API
- 🚀 10x faster response times
- 🔧 Added automatic token management
- 🔒 Improved security with environment variables
- 📊 Added response time monitoring
- 🎉 Initial release with local Mistral model
- Groq for lightning-fast AI inference
- Meta for the Llama model
- YouTube Transcript API for transcript extraction
- FastAPI for the backend framework