ColPali Cookbooks πŸ‘¨πŸ»β€πŸ³

Recipes for learning, fine-tuning, and adapting ColPali to your multimodal RAG use cases.

GitHub arXiv Hugging Face X

[ColPali Engine] [ViDoRe Benchmark]

Introduction

With our new model ColPali, we leverage Vision Language Models (VLMs) to construct efficient multi-vector embeddings in the visual space for document retrieval. By feeding the ViT output patches from PaliGemma-3B through a linear projection, we create a multi-vector representation of each document page. We train the model to maximize the similarity between these document embeddings and the corresponding query embeddings, following the late-interaction mechanism introduced in ColBERT.
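
To make the late-interaction scoring concrete, here is a minimal PyTorch sketch of the ColBERT-style MaxSim operator used to score a query against a document page (the tensor shapes and names are illustrative assumptions, not the exact training code):

```python
import torch

def maxsim_score(query_embeddings: torch.Tensor, doc_embeddings: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late-interaction score between one query and one page.

    query_embeddings: (n_query_tokens, dim), L2-normalized token embeddings.
    doc_embeddings:   (n_patches, dim), L2-normalized patch embeddings.
    """
    # Similarity of every query token against every document patch.
    similarities = query_embeddings @ doc_embeddings.T  # (n_query_tokens, n_patches)
    # Each query token keeps its best-matching patch; the page score is the sum.
    return similarities.max(dim=1).values.sum()
```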

Using ColPali removes the need for potentially complex and brittle layout-recognition and OCR pipelines: a single model accounts for both the textual and the visual content (layout, charts, ...) of a document.
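
As a rough end-to-end sketch of what this looks like in practice (assuming the colpali-engine package and the vidore/colpali-v1.2 checkpoint; the file paths are hypothetical, and the notebooks show the exact, up-to-date API):

```python
import torch
from PIL import Image
from colpali_engine.models import ColPali, ColPaliProcessor

model_name = "vidore/colpali-v1.2"  # assumed checkpoint name
model = ColPali.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="cuda").eval()
processor = ColPaliProcessor.from_pretrained(model_name)

# Document pages are plain images: no OCR or layout parsing required.
images = [Image.open("report_page_1.png"), Image.open("report_page_2.png")]  # hypothetical files
queries = ["What was the revenue in 2023?"]

with torch.no_grad():
    image_embeddings = model(**processor.process_images(images).to(model.device))
    query_embeddings = model(**processor.process_queries(queries).to(model.device))

# Late-interaction (MaxSim) scores: one row per query, one column per page.
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```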

This repository contains notebooks for learning about the ColVision family of models, fine-tuning them for your specific use case, creating similarity maps to interpret their predictions, and more! 😍

| Task | Notebook | Description |
|------|----------|-------------|
| Interpretability | ColPali: Generate your own similarity maps πŸ‘€ | Generate similarity maps to interpret ColPali's predictions. |
| Fine-tuning | Fine-tune ColPali πŸ› οΈ | Fine-tune ColPali using LoRA and optional 4-bit/8-bit quantization (configuration sketch after this table). |
| Interpretability | ColQwen2: Generate your own similarity maps πŸ‘€ | Generate similarity maps to interpret ColQwen2's predictions. |
| RAG | ColQwen2: One model for your whole RAG pipeline with adapter hot-swapping πŸ”₯ | Save VRAM by using a single VLM for your entire RAG pipeline; works even on Colab's free T4 GPU (hot-swapping sketch after this table). |
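
For the fine-tuning recipe, the LoRA and quantization setup looks roughly like the following minimal sketch with peft and transformers. The rank, dropout, and target_modules are assumptions to adapt to your backbone; see the fine-tuning notebook for the actual configuration:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# Optional 4-bit quantization of the frozen backbone (requires bitsandbytes).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter on the attention projections; the module names below are
# assumptions and must match the layer names of the model you fine-tune.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```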
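
And for the adapter hot-swapping trick: since ColQwen2 is a LoRA adapter on top of a generative VLM, you can keep a single base model in VRAM and toggle the adapter depending on whether you are retrieving or generating. A hedged peft sketch (the adapter path and the exact workflow are assumptions; the RAG notebook shows the real recipe):

```python
from peft import PeftModel

# `base_vlm` is the generative backbone (e.g. Qwen2-VL) already loaded once.
model = PeftModel.from_pretrained(
    base_vlm, "path/to/retrieval-adapter", adapter_name="retrieval"  # hypothetical path
)

model.set_adapter("retrieval")
# ... embed pages and queries, run retrieval with the adapter active ...

with model.disable_adapter():
    # With the adapter disabled, the same weights act as the plain
    # generative VLM: answer questions over the retrieved pages here.
    ...
```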

Instructions

Open with Colab

The easiest way to use the notebooks is to open them from the examples directory and click on the "Open in Colab" badge at the top of each notebook. This will open the notebook in Google Colab, where you can run the code and experiment with the models.

Run locally

If you prefer to run the notebooks locally, you can clone the repository and open the notebooks in Jupyter Notebook or in your IDE.

Citation

ColPali: Efficient Document Retrieval with Vision Language Models

Authors: Manuel Faysse*, Hugues Sibille*, Tony Wu*, Bilel Omrani, Gautier Viaud, CΓ©line Hudelot, Pierre Colombo (* denotes equal contribution)

@misc{faysse2024colpaliefficientdocumentretrieval,
      title={ColPali: Efficient Document Retrieval with Vision Language Models}, 
      author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and CΓ©line Hudelot and Pierre Colombo},
      year={2024},
      eprint={2407.01449},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2407.01449}, 
}
