# Local RAG Example

Watch the video tutorial here. Read the blog post using Mistral here.

This repository contains an example project for building a private Retrieval-Augmented Generation (RAG) application using Llama3.2, Ollama, and PostgreSQL. It demonstrates how to set up a RAG pipeline that does not rely on external API calls, ensuring that sensitive data remains within your infrastructure.
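To make the flow concrete, here is a minimal Python sketch of the RAG steps the pipeline performs: embed text, rank stored chunks by similarity, build a context-grounded prompt, and generate an answer. This is an illustration, not code from the repository; the helper names (`embed`, `generate`, `build_prompt`, `cosine_similarity`) are hypothetical, and the network calls assume the Ollama container from the setup below is listening on `localhost:11434`.

```python
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def cosine_similarity(a, b):
    """Rank a stored chunk's embedding against the query embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def build_prompt(chunks, question):
    """Assemble retrieved context and the user question into one prompt."""
    context = "\n\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def embed(text, model="nomic-embed-text"):
    """Get an embedding from Ollama's /api/embeddings endpoint."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/embeddings",
        data=json.dumps({"model": model, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def generate(prompt, model="llama3.2"):
    """Get a completion from Ollama's /api/generate endpoint (non-streaming)."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Because every call stays on your own machine (Ollama for models, PostgreSQL for storage), no document text or question ever leaves your infrastructure.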

## Prerequisites

## Docker Setup

- Create a network through which the Ollama and PostgreSQL containers will interact:

  ```shell
  docker network create local-rag
  ```

- Start the Ollama container (the `--network` flag ensures the container runs on the network defined above):

  ```shell
  docker run -d --network local-rag -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  ```

  Then pull the models:

  - Llama3.2: `docker exec -it ollama ollama pull llama3.2`
  - Mistral: `docker exec -it ollama ollama pull mistral`
  - Nomic Embed v1.5: `docker exec -it ollama ollama pull nomic-embed-text`

- Start TimescaleDB:

  ```shell
  docker run -d --network local-rag --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb-ha:pg16
  ```

  Then enable pgai, which also installs pgvector and plpython3:

  ```sql
  CREATE EXTENSION IF NOT EXISTS "ai" VERSION '0.4.0' CASCADE;
  ```
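With the extension enabled, embeddings can be generated and queried directly in SQL. The sketch below is illustrative, not from the repository: the `documents` table and its columns are hypothetical, it assumes pgai's `ai.ollama_embed` function with a `host` parameter pointing at the Ollama container over the `local-rag` network, and it assumes `nomic-embed-text` produces 768-dimensional vectors.

```sql
-- Hypothetical table of document chunks with pgvector embeddings
CREATE TABLE documents (
    id bigint GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    content text NOT NULL,
    embedding vector(768)
);

-- Embed a chunk via the Ollama container on the shared Docker network
INSERT INTO documents (content, embedding)
VALUES ('PostgreSQL is an open-source relational database.',
        ai.ollama_embed('nomic-embed-text',
                        'PostgreSQL is an open-source relational database.',
                        host => 'http://ollama:11434'));

-- Retrieve the 5 nearest chunks (<=> is pgvector's cosine-distance operator)
SELECT content
FROM documents
ORDER BY embedding <=> ai.ollama_embed('nomic-embed-text',
                                       'What is PostgreSQL?',
                                       host => 'http://ollama:11434')
LIMIT 5;
```

Note that the queries use the container name `ollama` as the host, which resolves because both containers sit on the `local-rag` network created above.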