Watch the video tutorial here. Read the blog post using Mistral here.
This repository contains an example project for building a private Retrieval-Augmented Generation (RAG) application using Llama3.2, Ollama, and PostgreSQL. It demonstrates how to set up a RAG pipeline that does not rely on external API calls, ensuring that sensitive data remains within your infrastructure.
## Prerequisites

- Docker
- Python, psycopg
- Ollama
- PostgreSQL, pgai
## Setup

- Create a network through which the Ollama and PostgreSQL containers will interact:

  ```shell
  docker network create local-rag
  ```
- Run the Ollama docker container (note: the `--network` flag ensures the container runs on the network defined above):

  ```shell
  docker run -d --network local-rag -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  ```
- Pull Llama3.2:

  ```shell
  docker exec -it ollama ollama pull llama3.2
  ```

- Pull Mistral:

  ```shell
  docker exec -it ollama ollama pull mistral
  ```

- Pull Nomic Embed v1.5 (used to generate embeddings):

  ```shell
  docker exec -it ollama ollama pull nomic-embed-text
  ```
- Run the TimescaleDB container:

  ```shell
  docker run -d --network local-rag --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb-ha:pg16
  ```