RAGify is a Retrieval-Augmented Generation (RAG) application designed to enhance the way you interact with PDF documents. Powered by Streamlit, LangChain, ChromaDB, and local LLMs via Ollama, the app lets you query PDF files intelligently in English, French, and Arabic.
- PDF Querying: Upload PDFs and ask questions to extract insights quickly and accurately.
- Multilingual Support: Seamless handling of English, French, and Arabic text for queries and responses.
- Local LLMs: Preserves privacy by running language models locally via Ollama; no external API is required.
- Efficient Retrieval: Uses ChromaDB for fast, accurate document embedding and retrieval (see the sketch after this list).
- Streamlit UI: User-friendly interface for easy document interaction.
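To make the pipeline concrete, here is a minimal sketch of how these pieces can be wired together with the langchain-community integrations for Ollama and ChromaDB. The PDF path, chunk sizes, prompt, and collection directory are illustrative assumptions; the app's actual code may differ.

```python
# Minimal RAG sketch (assumptions: langchain, langchain-community,
# chromadb, and pypdf installed; Ollama server running locally).
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter

# 1. Load a PDF and split it into overlapping chunks.
pages = PyPDFLoader("example.pdf").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(pages)

# 2. Embed the chunks locally (no external API) and index them in ChromaDB.
embeddings = OllamaEmbeddings(model="hf.co/nomic-ai/nomic-embed-text-v1.5-GGUF:F32")
store = Chroma.from_documents(chunks, embeddings, persist_directory="chroma_db")

# 3. Retrieve the chunks most similar to the question.
question = "What is the main conclusion of this document?"
hits = store.similarity_search(question, k=4)
context = "\n\n".join(doc.page_content for doc in hits)

# 4. Let a local model answer from the retrieved context only.
llm = Ollama(model="llama3.2:3b")
print(llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```

Because the embedding model, vector store, and LLM all run on your machine, no document text ever leaves it.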
- Clone the repository and move into the code directory:

```bash
git clone https://github.com/ITSAIDI/RAGify.git
cd RAGify
cd Code
```
- First, install the Ollama server on your machine.
- Then, in a new terminal, run the commands below to pull the required models:
```bash
ollama pull hf.co/nomic-ai/nomic-embed-text-v1.5-GGUF:F32
ollama pull llama3.2:3b
ollama pull llama3.1:8b
ollama pull qwen:7b
```
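- You can check that all four models were pulled successfully with:

```bash
ollama list
```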
- Then, in a fresh Conda env or venv, install the Python dependencies:

```bash
pip install -r requirements.txt
```

- Finally, launch the app:

```bash
streamlit run main.py
```

Streamlit will open the interface in your browser (by default at http://localhost:8501).
- Upload one or more PDF files via the Streamlit interface (a minimal sketch of this flow follows the list).
- Choose your query language (Arabic or other).
- Ask questions about the document.
- Get precise answers powered by the RAG pipeline.
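As a rough illustration, the steps above might map onto a Streamlit page like the one below. This is a hedged sketch, not the app's actual main.py; `answer_question` is a hypothetical stand-in (stubbed here so the snippet runs on its own) for the LangChain + ChromaDB + Ollama pipeline sketched under the feature list.

```python
import streamlit as st

def answer_question(files, question: str, language: str) -> str:
    # Hypothetical stand-in for the real RAG pipeline; returns a
    # placeholder so this sketch is runnable in isolation.
    return f"(answer in {language} to: {question!r}, over {len(files)} file(s))"

st.title("RAGify")
files = st.file_uploader("Upload PDF(s)", type="pdf", accept_multiple_files=True)
language = st.selectbox("Query language", ["Arabic", "English", "French"])
question = st.text_input("Ask a question about the documents")

if files and question:
    st.write(answer_question(files, question, language))
```

In a real app you would cache the vector index (for example with st.cache_resource) so uploads are not re-embedded on every rerun.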
- Streamlit: Frontend interface for user interaction.
- LangChain: Framework for building RAG pipelines.
- ChromaDB: Vector database for document embeddings and retrieval.
- Ollama: Local language model server for secure, private inference.
Contributions are welcome! Please fork the repository and submit a pull request.
Special thanks to the developers of Streamlit, LangChain, ChromaDB, and Ollama for their fantastic tools that made this app possible.