YouTubeGPT lets you summarize and chat (Q&A) with YouTube videos. Its features include:
- provide a custom prompt for summaries ✍️ VIEW DEMO
  - tailor the summary to your needs with a custom prompt, or simply use the default summarization
- get answers to questions about the video content ❓ VIEW DEMO
  - part of the application is designed and optimized specifically for question-answering (Q&A) tasks
- create your own library/knowledge base 📂
  - summaries and answers can be saved to a library, accessible on a separate page!
  - additionally, summaries can be automatically saved in the directory where you run the app, under `<YT-channel-name>/<video-title>.md`
- choose from different OpenAI models 🤖
  - currently available: gpt-3.5-turbo, gpt-4 (turbo), gpt-4o (mini)
  - by choosing a different model, you can summarize even longer videos and potentially get better responses
- experiment with settings ⚙️
  - adjust the temperature and top P of the model (see the sketch after this list)
- choose a UI theme 🖌️
  - click the three dots in the upper right corner, select "Settings", and choose the light theme, the dark theme, or my custom aesthetic theme
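For orientation, the model choice, temperature, and top P roughly map onto LangChain's ChatOpenAI wrapper as in the sketch below. This is a hedged illustration, not the app's actual wiring; the model name and values are examples, and in older langchain-openai versions top P has to be passed via model_kwargs.

```python
# Hedged sketch: how model choice, temperature and top P could be passed to
# LangChain's ChatOpenAI wrapper (the app's internal wiring may differ).
# Requires OPENAI_API_KEY to be set in the environment.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",  # any of the models listed above
    temperature=0.7,      # higher = more varied output, lower = more deterministic
    top_p=1.0,            # nucleus sampling; older versions: model_kwargs={"top_p": 1.0}
)

summary = llm.invoke("Summarize the following transcript: ...")
print(summary.content)
```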
No matter how you choose to run the app, you will first need an OpenAI API key. Getting one is straightforward and free; have a look at OpenAI's instructions to get started.
- make sure to provide an OpenAI API key (l. 43 in docker-compose.yml; see the sketch below)
- adjust the path where summaries are saved (l. 39 in docker-compose.yml)
- execute one of the following commands:
# pull from Docker Hub
docker-compose up -d
# or build locally
docker-compose up --build -d
The app will be accessible in your browser at http://localhost:8501.
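The API key you set in docker-compose.yml (or pass via `-e` / `export` below) is presumably consumed from the environment at startup. A minimal sketch of that assumption, purely illustrative:

```python
# Hedged sketch: the key configured in docker-compose.yml is read from the
# environment at runtime. The error message here is illustrative, not the app's.
import os

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError("OPENAI_API_KEY is not set - the app cannot call the OpenAI API")
```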
# pull from Docker Hub
docker pull sudoleg/yotube-gpt:latest
# or build locally
docker build --tag=sudoleg/yotube-gpt:latest .
docker run -d -p 8501:8501 \
-v $(pwd):/app/responses \
-e OPENAI_API_KEY=<your-openai-api-key> \
--name youtube-ai sudoleg/yotube-gpt:latest
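The `-v $(pwd):/app/responses` mount is where saved summaries land on the host, following the `<YT-channel-name>/<video-title>.md` convention described above. A hedged sketch of that layout; the helper, file contents, and names are illustrative, not the app's actual code:

```python
# Hedged sketch: write a summary under <YT-channel-name>/<video-title>.md
# inside the mounted responses directory. Function name and values are illustrative.
from pathlib import Path

def save_summary(channel: str, title: str, summary: str,
                 base_dir: str = "/app/responses") -> Path:
    target = Path(base_dir) / channel / f"{title}.md"
    target.parent.mkdir(parents=True, exist_ok=True)  # create <channel> folder if missing
    target.write_text(summary, encoding="utf-8")
    return target

save_summary("SomeChannel", "Some Video Title", "# Summary\n...")
```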
ℹ️ For the best user experience, you should be in the Tier 1 usage tier, which requires a one-time payment of $5. It's worth it: you'll then have access to all models and higher rate limits.
I’m working on adding more features and am open to feedback and contributions. Don't hesitate to create an issue or a pull request. Also, if you are enjoying the app or find it useful, please consider giving the repository a star ⭐
This is a small side project, and it's easy to get started! If you want to contribute, here's the gist to get your changes rolling:
- Fork & clone: Fork the repo and clone your fork to start.
- Pick an issue or suggest one: Choose an open issue to work on, or suggest a new feature or bug fix by creating an issue for discussion.
- Develop: Make your changes.
  - Ensure your code is clean and documented. Test your changes at least exploratorily, and make sure to cover edge cases.
  - Commit your changes with clear, descriptive messages, using conventional commits.
- Stay updated: Keep your branch in sync with the main branch to avoid merge conflicts.
- Pull Request: Push your changes to your fork and submit a pull request (PR) to the main repository. Describe your changes and any relevant details.
- Engage: Respond to feedback on your PR to finalize your contribution.
# create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate
# install requirements
pip install -r requirements.txt
# you'll need an API key
export OPENAI_API_KEY=<your-openai-api-key>
# run chromadb (necessary for chat)
docker-compose up -d chromadb
# run app
streamlit run main.py
The app will be accessible in your browser at http://localhost:8501, and the ChromaDB API at http://localhost:8000/docs.
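To sanity-check that the ChromaDB container is reachable from Python (assuming the default host and port shown above), a quick check could look like this:

```python
# Hedged sketch: verify the ChromaDB service started by docker-compose is reachable.
import chromadb

client = chromadb.HttpClient(host="localhost", port=8000)
print(client.heartbeat())         # returns a timestamp if the server is up
print(client.list_collections())  # collections created by the app, if any
```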
The project is built using some amazing libraries:
- YouTube Transcript API is used to fetch video transcripts.
- LangChain is used to create a prompt, submit it to an LLM, and process its response.
- The UI is built using Streamlit.
- ChromaDB is used as a vector store for embeddings.
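Roughly, these pieces fit together as in the following sketch. It is a simplified illustration rather than the app's actual code; the video ID, prompt text, and model name are placeholders, and the youtube-transcript-api interface differs slightly between versions.

```python
# Hedged sketch of the overall flow: fetch a transcript, build a prompt with
# LangChain, and ask an OpenAI chat model to summarize it. Illustrative only.
from youtube_transcript_api import YouTubeTranscriptApi
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

video_id = "dQw4w9WgXcQ"  # example video ID

# Older releases expose a static get_transcript(); newer ones use
# YouTubeTranscriptApi().fetch(video_id) instead.
segments = YouTubeTranscriptApi.get_transcript(video_id)
transcript = " ".join(segment["text"] for segment in segments)

prompt = ChatPromptTemplate.from_template(
    "Summarize the following YouTube transcript:\n\n{transcript}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)

chain = prompt | llm
print(chain.invoke({"transcript": transcript}).content)
```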
This project is licensed under the MIT License - see the LICENSE file for details.