A set of qualitative tests for evaluating the capabilities of foundation vision models.
In 2023, multimodal vision models became substantially more powerful; the year has been dubbed the "year of multimodality." Roboflow actively explores new vision models and their capabilities, conducting qualitative tests to learn more about how different models perform.
Our tests help us answer questions like "will this model help with labeling data?" and "could this model be used in a two-stage detection system (e.g., running OCR on detected regions)?"
This repository contains several tests we have run on foundation vision models to understand their performance. If you are curious about exploring new vision models, this repository may give you direction and ideas, and help you learn something new about the model you are working with.
We test:
- Object detection
- Classification
- Document OCR
- Handwriting OCR
- Math OCR
See the sections below for reference images and prompts.
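As a concrete illustration, below is a minimal sketch of how one of these qualitative tests might be run against a hosted multimodal model. It assumes the OpenAI Python client (`openai>=1.0`) with GPT-4 with Vision; the image path and prompt are illustrative placeholders, not the exact prompts used in this repository.

```python
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative local test image; substitute your own reference image.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# A simple object detection-style prompt; phrasing is illustrative.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "List every object you see in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    max_tokens=300,
)

# Inspect the answer by hand; these tests are qualitative, not scored.
print(response.choices[0].message.content)
```

The same pattern works for the other test categories above by swapping the prompt (e.g., "What text appears in this image?" for document OCR) and the reference image.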
We also recommend testing visual question answering (asking general questions about an image), exploring areas such as:
- Spatial awareness: Can the model answer questions about how objects relate?
- Veracity: When asked a question about an image, is the model correct?
- Comprehensiveness: Does a model provide a comprehensive answer to your question? Are details missed?
- Consistency: Does a model consistently answer questions it can answer? Does it ever refuse a question it has previously answered?
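Consistency in particular can be probed by asking the same or related questions about one image and comparing the answers. Below is a minimal sketch, again assuming the OpenAI Python client and GPT-4 with Vision; the image URL and probe questions are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative hosted image and probe questions covering the areas above.
IMAGE_URL = "https://example.com/scene.jpg"
questions = [
    "Is the dog to the left or the right of the person?",  # spatial awareness
    "How many people are in this image?",                  # veracity
    "Describe everything you see in this image.",          # comprehensiveness
]

for question in questions * 2:  # ask each question twice to check consistency
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": IMAGE_URL}},
                ],
            }
        ],
        max_tokens=300,
    )
    print(question, "->", response.choices[0].message.content)
```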
Given the vast array of possibilities with vision models, no set of tests, this one included, can comprehensively evaluate what a model can do; this repository is a starting point for exploration.
We would love your help in making this repository even better! Whether you want to add a new experiment or have any suggestions for improvement, feel free to open an issue or pull request.