Sage: Chat with any codebase


[Screenshot: our chat window, showing a conversation with the Transformers library. πŸš€]

Getting started

Installation

Using pipx (recommended)

Make sure pipx is installed on your system (see instructions), then run:
pipx install git+https://github.com/Storia-AI/sage.git@main

Using venv and pip

Alternatively, you can manually create a virtual environment and install Code Sage via pip:
python -m venv sage-venv
source sage-venv/bin/activate
pip install git+https://github.com/Storia-AI/sage.git@main
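
If the installation succeeded, the two entry points should now be on your PATH. A quick check:

sage-index --help
sage-chat --help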

Prerequisites

sage performs two steps:

  1. Indexes your codebase (requiring an embedder and a vector store)
  2. Enables chatting via LLM + RAG (requiring access to an LLM)

πŸ’» Running locally (lower quality)
  1. To index the codebase locally, we use the open-source project Marqo, which is both an embedder and a vector store. To bring up a Marqo instance:

    docker rm -f marqo
    docker pull marqoai/marqo:latest
    docker run --name marqo -it -p 8882:8882 marqoai/marqo:latest
    

    This keeps a persistent Marqo console running in your terminal. On a fresh install, startup takes around 2-3 minutes.

  2. To chat with an LLM locally, we use Ollama:

    • Head over to ollama.com to download the appropriate binary for your machine.
    • Open a new terminal window.
    • Pull the desired model, e.g. ollama pull llama3.1. (A quick sanity check for both services is shown after this list.)
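
Before indexing, you can sanity-check that both local services are up. This is only a sketch: it assumes Marqo is listening on the default port from the docker run command above, and that the ollama CLI is on your PATH.

curl http://localhost:8882   # Marqo should answer on its default port
ollama list                  # the model you pulled (e.g. llama3.1) should be listed
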
☁️ Using external providers (higher quality)
  1. For embeddings, we support OpenAI and Voyage. In our experiments, OpenAI produces higher-quality embeddings, and its batch API is faster, with more generous rate limits. Export the API key of the desired provider:

    export OPENAI_API_KEY=... # or
    export VOYAGE_API_KEY=...
    
  2. We use Pinecone for the vector store, so you will need an API key:

    export PINECONE_API_KEY=...
    

    If you want to reuse an existing Pinecone index, specify it. Otherwise we'll create a new one called sage.

    export PINECONE_INDEX_NAME=...
    
  3. For reranking, we support NVIDIA, Voyage, Cohere, and Jina.

    • According to our experiments, NVIDIA performs best. To get an API key, follow these instructions. Note that NVIDIA's API keys are model-specific. We recommend using nvidia/nv-rerankqa-mistral-4b-v3.
    • Export the API key of the desired provider:
    export NVIDIA_API_KEY=...  # or
    export VOYAGE_API_KEY=...  # or
    export COHERE_API_KEY=...  # or
    export JINA_API_KEY=...
    
  4. For chatting with an LLM, we support OpenAI and Anthropic. For the latter, set an additional API key:

    export ANTHROPIC_API_KEY=...
    

For easier configuration, adapt the entries in the sample .sage-env file (change the API key names based on your desired setup) and run:

source .sage-env
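
For illustration, a .sage-env for the OpenAI + Pinecone + NVIDIA setup described above might contain the following (the values are placeholders; keep only the providers you actually use):

export OPENAI_API_KEY=...        # embeddings and/or chat
export PINECONE_API_KEY=...      # vector store
export PINECONE_INDEX_NAME=...   # optional; omit it to have a new index called sage created
export NVIDIA_API_KEY=...        # reranking
export ANTHROPIC_API_KEY=...     # only needed if you chat with Anthropic models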

Optional

If you are planning on indexing GitHub issues in addition to the codebase, you will need a GitHub token:

export GITHUB_TOKEN=...

Running it

  1. Select your desired repository:

    export GITHUB_REPO=huggingface/transformers
    
  2. Index the repository. This might take a few minutes, depending on its size.

    sage-index $GITHUB_REPO
    

    To use external providers instead of running locally, set --mode=remote.

  3. Chat with the repository, once it's indexed:

    sage-chat $GITHUB_REPO
    

    To use external providers instead of running locally, set --mode=remote.

Notes:

  • To get a public URL for your chat app, set --share=true.
  • You can override the default settings (e.g. the desired embedding model or LLM) via command-line flags. Run sage-index --help or sage-chat --help for a full list. A complete end-to-end example is shown below.
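
Putting it all together, a typical remote-mode session (using only the flags above; the repository is just an example) looks like this:

export GITHUB_REPO=huggingface/transformers
sage-index $GITHUB_REPO --mode=remote
sage-chat $GITHUB_REPO --mode=remote --share=true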

Additional features

πŸ”’ Working with private repositories

To index and chat with a private repository, simply set the GITHUB_TOKEN environment variable. To obtain this token, go to github.com > click on your profile icon > Settings > Developer settings > Personal access tokens. You can either make a fine-grained token for the desired repository, or a classic token.

export GITHUB_TOKEN=...
πŸ› οΈ Control which files get indexed

You can specify an inclusion or exclusion file in the following format:

# This is a comment
ext:.my-ext-1
ext:.my-ext-2
ext:.my-ext-3
dir:my-dir-1
dir:my-dir-2
dir:my-dir-3
file:my-file-1.md
file:my-file-2.py
file:my-file-3.cpp

where:

  • ext specifies a file extension
  • dir specifies a directory. This is not a full path. For instance, if you specify dir:tests in an exclusion file, then a file like /path/to/my/tests/file.py will be ignored.
  • file specifies a file name. This is also not a full path. For instance, if you specify file:__init__.py, then a file like /path/to/my/__init__.py will be ignored.

To specify an inclusion file (i.e. only index the specified files):

sage-index $GITHUB_REPO --include=/path/to/inclusion/file

To specify an exclusion file (i.e. index all files, except for the ones specified):

sage-index $GITHUB_REPO --exclude=/path/to/exclusion/file

By default, we use the exclusion file sample-exclude.txt.
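
As an illustration, a minimal exclusion file for a typical Python project could look like this (the entries below are only examples; adjust them to your repository):

# skip tests, build artifacts and lock files
dir:tests
dir:build
ext:.lock
file:__init__.py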

πŸ› Index open GitHub issues

You will need a GitHub token first:

export GITHUB_TOKEN=...

To index GitHub issues without comments:

sage-index $GITHUB_REPO --index-issues

To index GitHub issues with comments:

sage-index $GITHUB_REPO --index-issues --index-issue-comments

To index GitHub issues, but not the codebase:

sage-index $GITHUB_REPO --index-issues --no-index-repo
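
For example, to index both the codebase and the full issue discussion with external providers (combining the flags above), you could run:

export GITHUB_TOKEN=...
sage-index $GITHUB_REPO --index-issues --index-issue-comments --mode=remote
sage-chat $GITHUB_REPO --mode=remote
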
πŸ“š Experiment with retrieval strategies

Retrieving the right files from the vector database is arguably the quality bottleneck of the system. We are actively experimenting with various retrieval strategies and documenting our findings here.

Currently, we support the following types of retrieval (a combined usage sketch follows the list):

  • Vanilla RAG from a vector database (nearest neighbor between dense embeddings). This is the default.

  • Hybrid RAG that combines dense retrieval (embeddings-based) with sparse retrieval (BM25). Use --retrieval-alpha to weigh the two strategies.

    • A value of 1 means dense-only retrieval and 0 means BM25-only retrieval.
    • Note this is not available when running locally, only when using Pinecone as a vector store.
    • Contrary to Anthropic's findings, we find that BM25 actually hurts performance on codebases, because it gives an undeserved advantage to Markdown files.
  • Multi-query retrieval performs multiple query rewrites, makes a separate retrieval call for each, and takes the union of the retrieved documents. You can activate it by passing --multi-query-retrieval. This can be combined with both vanilla and hybrid RAG.

    • We find that on our benchmark this only marginally improves retrieval quality (from 0.44 to 0.46 R-precision) while being significantly slower and more expensive due to LLM calls. But your mileage may vary.
  • LLM-only retrieval completely circumvents indexing the codebase. We simply enumerate all file paths and pass them to an LLM together with the user query. We ask the LLM which files are likely to be relevant for the user query, solely based on their filenames. You can activate it by passing --llm-retriever.

    • We find that on our benchmark the performance is comparable with vector database solutions (R-precision is 0.44 for both). This is quite remarkable, since we've saved so much effort by not indexing the codebase. However, we are reluctant to claim that these findings generalize, for the following reasons:
      • Our (artificial) dataset occasionally contains explicit path names in the query, making it trivial for the LLM. Sample query: "Alice is managing a series of machine learning experiments. Please explain in detail how main in examples/pytorch/image-pretraining/run_mim.py allows her to organize the outputs of each experiment in separate directories."
      • Our benchmark focuses on the Transformers library, which is well-maintained and the file paths are often meaningful. This might not be the case for all codebases.
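
For instance, here is how the strategies above could be selected from the command line. This is a sketch: we assume these flags are accepted by sage-chat; run sage-chat --help to confirm the exact flag names and defaults in your version.

# hybrid retrieval, weighting dense and BM25 equally (requires Pinecone)
sage-chat $GITHUB_REPO --mode=remote --retrieval-alpha=0.5

# multi-query retrieval on top of vanilla RAG
sage-chat $GITHUB_REPO --mode=remote --multi-query-retrieval

# skip the vector store and let the LLM pick files based on their paths
sage-chat $GITHUB_REPO --llm-retriever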

Why chat with a codebase?

Sometimes you just want to learn how a codebase works and how to integrate it, without spending hours sifting through the code itself.

sage is like an open-source GitHub Copilot with the most up-to-date information about your repo.

Features:

  • Dead-simple set-up. Run two scripts and you have a functional chat interface for your code. That's really it.
  • Heavily documented answers. Every response shows where in the code the context for the answer was pulled from. Let's build trust in the AI.
  • Runs locally or on the cloud.
  • Plug-and-play. Want to improve the algorithms powering the code understanding/generation? We've made every component of the pipeline (embedder, vector store, LLM) easily swappable, so you can customize it to your heart's content.

Changelog

  • 2024-09-16: Renamed repo2vec to sage.
  • 2024-09-03: Support for indexing GitHub issues.
  • 2024-08-30: Support for running everything locally (Marqo for embeddings, Ollama for LLMs).

Want your repository hosted?

We're working to make all code on the internet searchable and understandable for devs. You can check out our early product, Code Sage. We pre-indexed a slew of OSS repos, and you can index your desired ones by simply pasting a GitHub URL.

If you're the maintainer of an OSS repo and would like a dedicated page on Code Sage (e.g. sage.storia.ai/your-repo), then send us a message at founders@storia.ai. We'll do it for free!

Extensions & Contributions

We built the code to be purposefully modular, so you can plug in your desired embedding, LLM, and vector store providers by simply implementing the relevant abstract classes.

Feel free to send feature requests to founders@storia.ai or make a pull request!
