Local RAG with Ollama models and PDF support

❗️ Forked and modified from: https://github.com/pixegami/rag-tutorial-v2

  1. Download and install Ollama, then pull the models:

    ollama pull llama3
    ollama pull mxbai-embed-large
    
  2. Create and activate a virtual environment:

    python -m venv .venv
    source .venv/bin/activate
    
  3. Install dependencies:

    pip install -r requirements.txt
    
  4. Add PDF files to the data folder

  5. Run streamlit app:

    streamlit run app.py
    

    If another Streamlit is installed globally, you can run Streamlit from the virtual environment explicitly:

    .venv/bin/python -m streamlit run app.py
    
  6. Press the Load button to compute embeddings for the PDF file(s); a sketch of this indexing step appears after this list

  7. Ask the model questions about the document content; a matching query sketch follows below
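
For context, here is a minimal sketch of what the Load step (step 6) roughly does, assuming the LangChain community integrations and a Chroma vector store. The function name build_index, the chroma directory, and the chunk sizes are illustrative assumptions; the actual code in app.py may be organized differently:

    # Sketch: index PDFs from ./data into a local Chroma store.
    # Assumes langchain-community and pypdf are installed and Ollama is running locally.
    from langchain_community.document_loaders import PyPDFDirectoryLoader
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import Chroma

    def build_index(data_dir: str = "data", persist_dir: str = "chroma") -> Chroma:
        # Load every PDF in the data folder as LangChain documents.
        docs = PyPDFDirectoryLoader(data_dir).load()
        # Split long pages into overlapping chunks so each embedding stays focused.
        splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=80)
        chunks = splitter.split_documents(docs)
        # Embed each chunk with the mxbai-embed-large model pulled in step 1.
        embeddings = OllamaEmbeddings(model="mxbai-embed-large")
        # Store chunks and vectors in a persistent on-disk Chroma collection.
        return Chroma.from_documents(chunks, embeddings, persist_directory=persist_dir)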
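
And a matching sketch of the question-answering step (step 7), again assuming LangChain's Ollama integration. The prompt wording, the answer function, and the retrieval settings are illustrative, not the repository's exact ones:

    # Sketch: answer a question against the indexed chunks.
    # Assumes the Chroma store built by the Load step and the llama3 model from step 1.
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import Chroma
    from langchain_community.llms import Ollama

    def answer(question: str, persist_dir: str = "chroma") -> str:
        embeddings = OllamaEmbeddings(model="mxbai-embed-large")
        store = Chroma(persist_directory=persist_dir, embedding_function=embeddings)
        # Retrieve the chunks most similar to the question.
        hits = store.similarity_search(question, k=5)
        context = "\n\n".join(doc.page_content for doc in hits)
        # Ask llama3 to answer using only the retrieved context.
        prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {question}"
        return Ollama(model="llama3").invoke(prompt)

These two sketches correspond to the Load button and the question box in the Streamlit app.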
