The goal of the first iteration is to have a local application that makes API calls to LLM providers.
It currently supports Ollama's Mistral and OpenAI, uses Chroma as the vector store, and implements basic RAG capabilities.
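Conceptually, the RAG loop here is: embed documents into Chroma, retrieve the chunks most similar to a question, and prepend them to the prompt sent to the model. The sketch below illustrates that loop with the `chromadb` and `openai` packages; the collection name, sample documents, model name, and DB path are illustrative assumptions, not the playground's actual code.

```python
# Minimal RAG sketch (illustrative only, not the playground's code):
# store documents in Chroma, retrieve the closest ones for a question,
# and pass them to the LLM as context.
import chromadb
from openai import OpenAI

chroma = chromadb.PersistentClient(path="./data/db")        # same path as DB in .env
collection = chroma.get_or_create_collection("playground")  # collection name is assumed

# Index a few documents; Chroma embeds them with its default embedding function.
collection.add(
    ids=["doc-1", "doc-2"],
    documents=[
        "Chroma is an open-source vector database.",
        "Mistral 7B can be served locally through Ollama.",
    ],
)

# Retrieve the most similar documents for the question.
question = "What can I use as a vector store?"
hits = collection.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

# Augment the prompt with the retrieved context and ask the model.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Swapping the OpenAI call for an Ollama-served Mistral model follows the same pattern: only the client and model name change, the retrieval step stays identical.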
- clone the repository

```bash
git clone https://github.com/vykhovanets/RAG-playground.git && cd RAG-playground
```
- add a `.env` file to the `RAG-playground` folder with the following content (a sketch that loads and checks these settings appears after the setup steps):
```bash
# API keys
OPENAI_API_KEY=...
# COHERE_API_KEY=...
# ANTHROPIC_API_KEY=...
# HF_API_KEY=...

# Persistence
PROJECTS_DIR='./data/projects'
HISTORIES_DIR='./data/histories'
DB='./data/db'
```
- install dependencies

```bash
python3.12 -m venv .envs/py-12 && source .envs/py-12/bin/activate
pip install uv && uv pip install -r requirements.txt
```
- run the app from the virtual environment

```bash
source .envs/py-12/bin/activate
streamlit run playground/main.py
```
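As a quick sanity check before launching Streamlit, the `.env` settings above can be loaded and verified with a small script. This is a hedged sketch, not part of the repository; it assumes the `python-dotenv` package is installed, and the file name `check_env.py` is hypothetical.

```python
# check_env.py -- illustrative sanity check, not part of the repository.
import os
from pathlib import Path

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads the .env file from the current working directory

# The OpenAI key must be present for the OpenAI backend to work.
if not os.getenv("OPENAI_API_KEY"):
    raise SystemExit("OPENAI_API_KEY is missing from .env")

# Create the persistence directories if they do not exist yet.
for var in ("PROJECTS_DIR", "HISTORIES_DIR", "DB"):
    path = Path(os.environ[var])
    path.mkdir(parents=True, exist_ok=True)
    print(f"{var} -> {path.resolve()}")
```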