Ovis: Structural Embedding Alignment for Multimodal Large Language Model

Ovis (Open VISion) is a novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. For a comprehensive introduction, please refer to the Ovis paper.

[Figure: Ovis architecture illustration]
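
The core idea can be sketched in a few lines of PyTorch. This is a toy illustration of structural alignment, not the repository's implementation, and all sizes below are arbitrary: each visual patch is mapped to a probability distribution over a learnable visual embedding table, and its embedding is the probability-weighted sum of table rows, mirroring how text tokens index a text embedding table.

import torch
import torch.nn as nn

class ToyVisualEmbedding(nn.Module):
    """Toy sketch: probabilistic visual tokens over a learnable embedding table."""
    def __init__(self, vit_dim=1152, num_visual_tokens=8192, hidden_dim=3584):
        super().__init__()
        self.head = nn.Linear(vit_dim, num_visual_tokens)         # patch feature -> token logits
        self.table = nn.Embedding(num_visual_tokens, hidden_dim)  # visual embedding table

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, vit_dim) from a ViT backbone
        probs = self.head(patch_features).softmax(dim=-1)         # probabilistic visual tokens
        return probs @ self.table.weight                          # (batch, num_patches, hidden_dim)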

Release

  • [11/26] 🔥 Announcing Ovis1.6-Gemma2-27B!
  • [11/04] 🔥 Announcing quantized versions of Ovis1.6: Ovis1.6-Gemma2-9B-GPTQ-Int4 and Ovis1.6-Llama3.2-3B-GPTQ-Int4!
  • [10/22] 🔥 Announcing Ovis1.6-Llama3.2-3B (Model, Demo)!
  • [09/19] 🔥 Announcing Ovis1.6-Gemma2-9B (Model, Demo)! This latest release further enhances high-resolution image processing, is trained on a larger, more diverse, and higher-quality dataset, and refines the training process with DPO training following instruction-tuning.
  • [07/24] 🔥 Introducing Ovis1.5, featuring improved high-resolution image processing and optimized training data for enhanced performance.
  • [06/14] 🔥 Launch of Ovis1.0, the inaugural version of the Ovis model.

Contents

  • Install
  • Model
  • Performance
  • Finetune
  • Inference
  • Quantization
  • Citation
  • Team
  • License
  • Disclaimer

Install

Ovis has been tested with Python 3.10, Torch 2.4.0, Transformers 4.46.2, and DeepSpeed 0.15.4. For a comprehensive list of package dependencies, please consult the requirements.txt file. Before finetuning or inference, please install Ovis as follows.

git clone git@github.com:AIDC-AI/Ovis.git
conda create -n ovis python=3.10 -y
conda activate ovis
cd Ovis
pip install -r requirements.txt
pip install -e .
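
After installation, a quick sanity check (a minimal sketch; the versions printed are simply whatever your environment resolved) confirms that the package and its key dependencies import cleanly:

import torch
import transformers
from ovis.serve.runner import OvisRunner  # same import path used by the inference wrapper below

print(f"torch {torch.__version__}, transformers {transformers.__version__}")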

Model

Ovis can be instantiated with popular LLMs. We provide the following Ovis MLLMs:

Ovis MLLMs          | ViT         | LLM                   | Model Weights | Demo
Ovis1.6-Gemma2-27B  | Siglip-400M | Gemma2-27B-It         | Huggingface   | -
Ovis1.6-Gemma2-9B   | Siglip-400M | Gemma2-9B-It          | Huggingface   | Space
Ovis1.6-Llama3.2-3B | Siglip-400M | Llama-3.2-3B-Instruct | Huggingface   | Space
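
The released weights can also be loaded directly from Huggingface with transformers. The snippet below is a minimal sketch using a model id from the table above; the exact generation interface and any extra keyword arguments are defined by the custom code shipped with each checkpoint, so consult the corresponding model card.

import torch
from transformers import AutoModelForCausalLM

# Any model id from the table above works here, e.g. the 9B variant.
model = AutoModelForCausalLM.from_pretrained(
    'AIDC-AI/Ovis1.6-Gemma2-9B',
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,   # Ovis checkpoints ship custom modeling code
)
model = model.cuda().eval()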

Performance

With 29B parameters, Ovis1.6-Gemma2-27B achieves exceptional performance in the OpenCompass benchmark, ranking among the top-tier open-source MLLMs.

[Figure: Ovis1.6-Gemma2-27B OpenCompass performance]

With just 10B parameters, Ovis1.6-Gemma2-9B leads the OpenCompass benchmark among open-source MLLMs within 30B parameters.

[Figure: Ovis1.6-Gemma2-9B OpenCompass performance]

Ovis1.6-Llama3.2-3B leads the OpenCompass benchmark among open-source MLLMs under 4B parameters, even surpassing Llama-3.2-11B-Vision-Instruct.

[Figure: Ovis1.6-Llama3.2-3B OpenCompass performance]

Finetune

Finetuning Ovis1.6-Gemma2-9B is supported in ms-swift.

Inference

We provide an inference wrapper in ovis/serve/runner.py, which can be used as follows:

from PIL import Image
from ovis.serve.runner import RunnerArguments, OvisRunner

# Load the input image and the text prompt
image = Image.open('IMAGE_PATH')
text = 'PROMPT'

# Point the runner at a local or Huggingface model path and run multimodal generation
runner_args = RunnerArguments(model_path='MODEL_PATH')
runner = OvisRunner(runner_args)
generation = runner.run([image, text])

Based on Gradio, Ovis can also be accessed via a web user interface:

python ovis/serve/server.py --model_path MODEL_PATH --port PORT

Quantization

We quantized Ovis1.6 using AutoGPTQ. For detailed information on running and creating your own quantized version, please refer to the respective Huggingface model cards: Ovis1.6-Gemma2-9B-GPTQ-Int4 and Ovis1.6-Llama3.2-3B-GPTQ-Int4. Quantized Ovis1.6 maintains performance comparable to its non-quantized counterpart while requiring less GPU memory:

  • Benchmark performance: [Figures: Ovis1.6-Gemma2-9B-GPTQ-Int4 and Ovis1.6-Llama3.2-3B-GPTQ-Int4 benchmark results]

  • GPU memory usage (max_partition=9): [Figure: VRAM comparison]
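
As a rough sketch (the authoritative loading recipe is on the model cards above), a quantized checkpoint can be loaded in the same way as the full-precision models, provided AutoGPTQ is installed:

import torch
from transformers import AutoModelForCausalLM

quantized = AutoModelForCausalLM.from_pretrained(
    'AIDC-AI/Ovis1.6-Gemma2-9B-GPTQ-Int4',
    torch_dtype=torch.float16,   # GPTQ kernels run in half precision
    trust_remote_code=True,
    device_map='cuda:0',
)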

Citation

If you find Ovis useful, please cite the paper:

@article{lu2024ovis,
  title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model}, 
  author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
  year={2024},
  journal={arXiv:2405.20797}
}

Team

This work is a collaborative effort by the MarcoVL team. We would also like to highlight other MLLM papers from our team.

License

This project is licensed under the Apache License, Version 2.0 (SPDX-License-Identifier: Apache-2.0).

Disclaimer

We used compliance-checking algorithms during the training process to ensure, to the best of our ability, that the trained model is compliant. Due to the complexity of the data and the diversity of language-model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will promptly address the matter.