This plugin allows you to perform zero-shot prediction on your dataset for the following tasks:
- Image Classification
- Object Detection
- Instance Segmentation
- Semantic Segmentation
Given a list of label classes, entered manually (comma-separated) or uploaded as a text file, the plugin performs zero-shot prediction on your dataset for the specified task and stores the results in a new field of your choosing.
- 2024-12-03: Added support for Apple AIMv2 Zero Shot Model (courtesy of @harpreetsahota204)
- 2024-12-16: Added MPS and GPU support for ALIGN, AltCLIP, Apple AIMv2 (courtesy of @harpreetsahota204)
- 2024-06-22: Updated interface for Python operator execution
- 2024-05-30: Added:
  - support for Grounding DINO for object detection and instance segmentation
  - confidence thresholding for object detection and instance segmentation
- 2024-03-06: Added support for YOLO-World for object detection and instance segmentation!
- 2024-01-10: Removed LAION CLIP models
- 2024-01-05: Added support for EVA-CLIP, SigLIP, and DFN CLIP for image classification!
- 2023-11-28: Version 1.1.1 supports OpenCLIP for image classification!
- 2023-11-13: Version 1.1.0 supports calling operators from the Python SDK!
- 2023-10-27: Added support for MetaCLIP for image classification
- 2023-10-20: Added support for AltCLIP and Align for image classification and GroupViT for semantic segmentation
- To use YOLO-World models, you must have `ultralytics>=8.1.42` installed (see the example command below).
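For example (a minimal sketch; any installation method that satisfies the version requirement works):

```shell
pip install -U "ultralytics>=8.1.42"
```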
As a starting point, this plugin comes with at least one zero-shot model per task. These are:
- ALIGN
- AltCLIP
- Apple AIMv2
- CLIP (OpenAI)
- CLIPA
- DFN CLIP: Data Filtering Networks
- EVA-CLIP
- MetaCLIP
- SigLIP
- Owl-ViT + Segment Anything (SAM)
- YOLO-World + Segment Anything (SAM)
- Grounding DINO + Segment Anything (SAM)
Most of the models used are from the HuggingFace Transformers library, and the CLIP and SAM models are from the FiftyOne Model Zoo.

Note: For SAM, you will need to have Facebook's `segment-anything` library installed.
You can see the implementations for all of these models in the following files:
- `classification.py`
- `detection.py`
- `instance_segmentation.py`
- `semantic_segmentation.py`
These models are "registered" via dictionaries in each file. In `semantic_segmentation.py`, for example, the dictionary is:
```python
SEMANTIC_SEGMENTATION_MODELS = {
    "CLIPSeg": {
        "activator": CLIPSeg_activator,
        "model": CLIPSegZeroShotModel,
        "name": "CLIPSeg",
    },
    "GroupViT": {
        "activator": GroupViT_activator,
        "model": GroupViTZeroShotModel,
        "name": "GroupViT",
    },
}
```
The `activator` checks the environment to see if the model is available, and the `model` is a `fiftyone.core.models.Model` object that is instantiated with the model name and the task, or a function that instantiates such a model. The `name` is the name of the model that will be displayed in the dropdown menu in the plugin.
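For illustration, here is a minimal sketch (not the plugin's actual code) of what such an activator might look like for a model whose only requirement is the `transformers` package; the function name is hypothetical:

```python
def my_transformers_activator():
    """Hypothetical activator: the model is available only if its
    dependencies can be imported in the current environment."""
    try:
        import transformers  # noqa: F401

        return True
    except ImportError:
        return False
```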
If you want to add your own model, you can add it to the dictionary in the corresponding file. For example, to add a new semantic segmentation model, you would add it to the `SEMANTIC_SEGMENTATION_MODELS` dictionary in `semantic_segmentation.py`:
```python
SEMANTIC_SEGMENTATION_MODELS = {
    "CLIPSeg": {
        "activator": CLIPSeg_activator,
        "model": CLIPSegZeroShotModel,
        "name": "CLIPSeg",
    },
    "GroupViT": {
        "activator": GroupViT_activator,
        "model": GroupViTZeroShotModel,
        "name": "GroupViT",
    },
    ...,  # other models
    "My Model": {
        "activator": my_model_activator,
        "model": my_model,
        "name": "My Model",
    },
}
```
Note: you need to implement the `activator` and `model` functions for your model. The `activator` should check the environment to see if the model is available, and the `model` should be a `fiftyone.core.models.Model` object that is instantiated with the model name and the task.
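As a rough sketch of what the `model` entry might look like, here is a hypothetical skeleton of a custom zero-shot semantic segmentation model, paired with an activator like the one sketched earlier. The class, the placeholder inference logic, and the exact set of methods you need to override are assumptions; consult the existing implementations in `semantic_segmentation.py` and the `fiftyone.core.models.Model` interface for your FiftyOne version before adapting it:

```python
import numpy as np

import fiftyone as fo
from fiftyone.core.models import Model


class MyZeroShotSegmentationModel(Model):
    """Hypothetical zero-shot semantic segmentation model."""

    def __init__(self, categories=None):
        # The candidate label classes selected in the plugin
        self.categories = categories or []

    @property
    def media_type(self):
        # This plugin operates on images
        return "image"

    def predict(self, arg):
        # `arg` is assumed to be an image loaded as a numpy array; run your
        # model here and return a FiftyOne Segmentation whose integer mask
        # values index into `self.categories` (all zeros as a placeholder)
        mask = np.zeros(arg.shape[:2], dtype=np.uint8)
        return fo.Segmentation(mask=mask)


def my_model(categories=None, **kwargs):
    # Factory used as the "model" entry in the registry dictionary
    return MyZeroShotSegmentationModel(categories=categories)
```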
To install the plugin, run:

```shell
fiftyone plugins download https://github.com/jacobmarks/zero-shot-prediction-plugin
```
If you want to use AltCLIP, ALIGN, Owl-ViT, CLIPSeg, or GroupViT, you will also need to install the `transformers` library:

```shell
pip install transformers
```
If you want to use SAM, you will also need to install the `segment-anything` library:

```shell
pip install git+https://github.com/facebookresearch/segment-anything.git
```
If you want to use OpenCLIP, you will also need to install the `open_clip` library, either from PyPI:

```shell
pip install open-clip-torch
```

or from source:

```shell
pip install git+https://github.com/mlfoundations/open_clip.git
```
If you want to use YOLO-World, you will also need to install the `ultralytics` library:

```shell
pip install -U ultralytics
```
All of the operators in this plugin can be run in delegated execution mode. This means that instead of waiting for the operator to finish, you schedule the operation to be performed separately. This is useful for long-running operations, such as performing inference on a large dataset.
Once you have pressed the `Schedule` button for the operator, you will be able to see the job from the command line using FiftyOne's command line interface. Running:

```shell
fiftyone delegated list
```

will show you the status of all delegated operations.
To launch a service which runs the operation, as well as any other delegated operations that have been scheduled, run:
```shell
fiftyone delegated launch
```
Once the operation has completed, you can view the results in the App (upon refresh).
After the operation completes, you can also clean up your list of delegated operations by running:
```shell
fiftyone delegated cleanup -s COMPLETED
```
The plugin provides operators that let you:
- Select the task you want to perform zero-shot prediction on (image classification, object detection, instance segmentation, or semantic segmentation), and the field you want to add the results to
- Perform zero-shot image classification on your dataset
- Perform zero-shot object detection on your dataset
- Perform zero-shot instance segmentation on your dataset
- Perform zero-shot semantic segmentation on your dataset
You can also run these operators from the Python SDK!
```python
import fiftyone as fo
import fiftyone.operators as foo
import fiftyone.zoo as foz

dataset = fo.load_dataset("quickstart")

## Access the operator via its URI (plugin name + operator name)
zsc = foo.get_operator("@jacobmarks/zero_shot_prediction/zero_shot_classify")

## Run zero-shot classification on all images in the dataset, specifying the labels with the `labels` argument
zsc(dataset, labels=["cat", "dog", "bird"])

## Run zero-shot classification on all images in the dataset, specifying the labels with a text file
zsc(dataset, labels_file="/path/to/labels.txt")

## Specify the model to use, and the field to add the results to
zsc(dataset, labels=["cat", "dog", "bird"], model_name="CLIP", label_field="predictions")

## Run zero-shot detection on a view
zsd = foo.get_operator("@jacobmarks/zero_shot_prediction/zero_shot_detect")

view = dataset.take(10)

await zsd(
    view,
    labels=["license plate"],
    model_name="OwlViT",
    label_field="owlvit_license_plate",
)
```
All four of the task-specific zero-shot prediction operators also expose a `list_models()` method, which returns a list of the available models for that task.
```python
zsss = foo.get_operator(
    "@jacobmarks/zero_shot_prediction/zero_shot_semantic_segment"
)

zsss.list_models()
## ['CLIPSeg', 'GroupViT']
```
Note: The `zero_shot_predict` operator is not yet supported in the Python SDK.
Note: With earlier versions of FiftyOne, you may have trouble running these operator executions within a Jupyter notebook. If so, try running them in a Python script, or upgrading to the latest version of FiftyOne!