
GraphRAG + AutoGen + Ollama + Chainlit UI = Local Multi-Agent RAG Superbot

[Graphical abstract]

This application integrates GraphRAG with AutoGen agents, powered by local LLMs from Ollama, making embedding and inference free and fully offline. Key highlights include:

  • Agentic-RAG: integrates GraphRAG's knowledge-search method with an AutoGen agent via function calling (a minimal sketch follows this list).
  • Offline LLM support: configures GraphRAG (local and global search) to use local models from Ollama for inference and embedding.
  • Non-OpenAI function calling: extends AutoGen to support function calling with non-OpenAI LLMs from Ollama via a LiteLLM proxy server.
  • Interactive UI: deploys a Chainlit UI to handle continuous conversations, multi-threading, and user input settings.
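
As a rough illustration of the first and third highlights, below is a minimal sketch of exposing a GraphRAG global search to an AutoGen agent through the LiteLLM proxy. This is not the repo's actual appUI.py wiring: the function name, the proxy address (LiteLLM's default), and the query CLI invocation are assumptions for illustration.

    import subprocess
    import autogen

    # Assumed: the LiteLLM proxy from the setup steps is listening on its
    # default address, and the model name matches the `litellm --model` flag.
    llm_config = {
        "config_list": [{
            "model": "ollama_chat/llama3",
            "base_url": "http://0.0.0.0:4000",
            "api_key": "not-needed",  # the proxy does not check API keys
        }]
    }

    def graphrag_global_search(question: str) -> str:
        """Answer a question from the GraphRAG index (shells out to the query CLI)."""
        result = subprocess.run(
            ["python", "-m", "graphrag.query", "--root", ".", "--method", "global", question],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
    user_proxy = autogen.UserProxyAgent(
        name="user", human_input_mode="NEVER", code_execution_config=False
    )

    # Register the search function so the assistant can call it as a tool.
    autogen.register_function(
        graphrag_global_search,
        caller=assistant,
        executor=user_proxy,
        description="Search the GraphRAG knowledge graph and return an answer.",
    )

    user_proxy.initiate_chat(assistant, message="What are the main themes in the documents?")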

[Screenshots: main interface and widget settings]

Useful Links 🔗

  • Full guide: "Microsoft's GraphRAG + AutoGen + Ollama + Chainlit = Fully Local & Free Multi-Agent RAG Superbot" on Medium 📚

📦 Installation and Setup (Linux)

Follow these steps to set up and run AutoGen GraphRAG Local with Ollama and Chainlit UI:

  1. Install LLMs:

    Visit Ollama's website for installation files.

    ollama pull hf.co/IlyaGusev/saiga_nemo_12b_gguf:Q4_K_M
    ollama pull nomic-embed-text
    ollama run llama3.2-vision
    ollama serve
  2. Create conda environment and install packages:

    conda create -n RAG_agents python=3.12
    conda activate RAG_agents
    git clone https://github.com/ZubikIT/Autogen_GraphRAG_Ollama
    cd Autogen_GraphRAG_Ollama
    pip install -r requirements.txt
  3. Initiate GraphRAG root folder:

    mkdir -p ./input
    graphrag init --root . 
    mv ./utils/settings.yaml ./
  4. Replace 'embedding.py' and 'openai_embeddings_llm.py' in the GraphRAG package folder with the versions from the utils folder (locate the installed files first):

    sudo find / -name openai_embeddings_llm.py
    sudo find / -name embedding.py
    # then copy ./utils/openai_embeddings_llm.py and ./utils/embedding.py over the paths found above
  5. Create embeddings and knowledge graph (a sanity-check sketch follows these steps):

    python -m graphrag.index --root .
  6. Start Lite-LLM proxy server:

    litellm --model ollama_chat/llama3
  7. Run app:

    chainlit run appUI.py
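
Once step 5 has finished, a quick sanity check of the index can help before starting the proxy and the UI. This is a sketch under an assumption: GraphRAG 0.x writes its artifacts as parquet files under ./output/<run-id>/artifacts/, and the file names below come from that era's pipeline.

    import pandas as pd
    from pathlib import Path

    # Assumed layout (GraphRAG 0.x): parquet artifacts under ./output/<run-id>/artifacts/.
    artifacts = sorted(Path("output").glob("*/artifacts"))[-1]
    entities = pd.read_parquet(artifacts / "create_final_entities.parquet")
    relationships = pd.read_parquet(artifacts / "create_final_relationships.parquet")
    print(f"{len(entities)} entities, {len(relationships)} relationships indexed")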

📦 Installation and Setup (Windows)

Follow these steps to set up and run AutoGen GraphRAG Local with Ollama and Chainlit UI on Windows:

  1. Install LLMs:

    Visit Ollama's website for installation files.

    ollama pull mistral
    ollama pull nomic-embed-text
    ollama pull llama3
    ollama serve
  2. Create conda environment and install packages:

    git clone https://github.com/karthik-codex/autogen_graphRAG.git
    cd autogen_graphRAG
    python -m venv venv
    ./venv/Scripts/activate
    pip install -r requirements.txt
  3. Initiate GraphRAG root folder:

    mkdir input
    python -m graphrag.index --init --root .
    cp ./utils/settings.yaml ./
  4. Replace 'embedding.py' and 'openai_embeddings_llm.py' in the GraphRAG package folder with the versions from the utils folder:

    cp ./utils/openai_embeddings_llm.py .\venv\Lib\site-packages\graphrag\llm\openai\openai_embeddings_llm.py
    cp ./utils/embedding.py .\venv\Lib\site-packages\graphrag\query\llm\oai\embedding.py 
  5. Create embeddings and knowledge graph:

    python -m graphrag.index --root .
  6. Start Lite-LLM proxy server:

    litellm --model ollama_chat/llama3
  7. Run the app (a minimal sketch of a Chainlit entry point follows these steps):

    chainlit run appUI.py
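
For orientation, this is the general shape of a Chainlit entry point like appUI.py. It is only a minimal sketch: run_agents is a hypothetical stand-in for the AutoGen + GraphRAG pipeline the real app wires up.

    import chainlit as cl

    async def run_agents(question: str) -> str:
        # Hypothetical stand-in for the repo's AutoGen + GraphRAG pipeline;
        # it just echoes the question in this sketch.
        return f"(agent answer for: {question})"

    @cl.on_chat_start
    async def start():
        await cl.Message(content="GraphRAG multi-agent bot ready.").send()

    @cl.on_message
    async def on_message(message: cl.Message):
        # Forward each user turn to the agents and send back the final answer.
        answer = await run_agents(message.content)
        await cl.Message(content=answer).send()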
