Andreas Fürst* 1, Elisabeth Rumetshofer* 1, Johannes Lehner1, Viet Tran1, Fei Tang3, Hubert Ramsauer1, David Kreil2, Michael Kopp2, Günter Klambauer1, Angela Bitto-Nemling1, Sepp Hochreiter1 2
1 ELLIS Unit Linz and LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria
2 Institute of Advanced Research in Artificial Intelligence (IARAI)
3 HERE Technologies
* Equal contribution
A detailed blog post on this paper is available at this link.
The full paper is available here.
This repository contains the implementation of CLOOB used to obtain the results reported in the paper. The implementation is based on OpenCLIP, an open-source implementation of OpenAI's CLIP.
We provide an `environment.yml` file to set up a conda environment with all required packages. Run the following commands to clone the repository and create the environment.
```bash
# Clone the repository and switch into the directory
git clone https://github.com/ml-jku/cloob
cd cloob

# Create the environment and activate it
conda env create --file environment.yml
conda activate cloob

# Add the src directory to the PYTHONPATH environment variable
export PYTHONPATH="$PYTHONPATH:$PWD/src"
```
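As a quick sanity check that the environment is active and PyTorch can see a GPU, the following minimal snippet (an illustrative check, not part of the repository) can be run from the repository root:

```python
# Minimal environment check: verifies that PyTorch is installed in the
# activated conda environment and reports whether CUDA is available.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```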
For pre-training, we use the two datasets supported by OpenCLIP, namely Conceptual Captions and YFCC.
OpenCLIP already provides a script to download and prepare the Conceptual Captions dataset, which contains 2.89M training images and 13k validation images.
First, download the Conceptual Captions URLs and then run the script `gather_cc.py`:

```bash
python3 src/data/gather_cc.py path/to/Train_GCC-training.tsv path/to/Validation_GCC-1.1.0-Validation.tsv
```

The resulting `*_output.csv` files are the ones passed to `--train-data` and `--val-data` in the training command below.
We use the same subset of ~15M images from the YFCC100M dataset as CLIP. OpenAI provides a list of (line number, photo identifier, photo hash) triples for each image contained in this subset here.
For more information, see YFCC100m Subset on OpenAI's GitHub.
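To work with this list programmatically, a minimal sketch along the following lines can parse it into a lookup table. The file name, and the assumption that the list is a bzip2-compressed, tab-separated file with one (line number, photo identifier, photo hash) triple per line, are illustrative; adjust them to the file you actually downloaded.

```python
import bz2
import csv

# Hypothetical file name; the actual name of the downloaded subset list may differ.
SUBSET_FILE = "yfcc100m_subset_data.tsv.bz2"

def load_yfcc_subset(path=SUBSET_FILE):
    """Parse the subset list into a dict: photo identifier -> (line number, photo hash)."""
    subset = {}
    with bz2.open(path, mode="rt", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            line_number, photo_id, photo_hash = row
            subset[photo_id] = (int(line_number), photo_hash)
    return subset

if __name__ == "__main__":
    subset = load_yfcc_subset()
    print(f"{len(subset)} photos in the YFCC subset")
```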
In the paper we report results on several downstream tasks. Except for ImageNet, we provide links to already pre-processed versions (where necessary) of the respective test sets.
| Dataset | Description | Official | Processed |
|---|---|---|---|
| Birdsnap | This dataset contains images of North American bird species; however, our version is smaller than the one reported in CLIP, as some samples are no longer available. | Link | Link |
| Country211 | This dataset was published in CLIP and is a small subset of the YFCC100m dataset. It consists of photos that can be assigned to 211 countries via GPS coordinates. For each country, 200 photos are sampled for the training set and 100 for testing. | Link | Link |
| Flowers102 | Images of 102 flower categories commonly occurring in the United Kingdom were collected. Several classes are very similar, and there is large variation in scale, pose, and lighting. | Link | Link |
| GTSRB | This dataset was released for a challenge held at IJCNN 2011. It contains images of German traffic signs from more than 40 classes. | Link | Link |
| Stanford Cars | This dataset contains images of 196 car models at the level of make, model, and year (e.g. Tesla Model S Sedan 2012). | Link | Link |
| UCF101 | The dataset has been created by extracting the middle frame from each video. | Link | Link |
| ImageNet | This dataset spans 1000 object classes and contains 1,281,167 training images, 50,000 validation images, and 100,000 test images. | Link | - |
| ImageNet v2 | The ImageNetV2 dataset contains new test data for the ImageNet benchmark. | Link | - |
The following is an example command for pre-training on Conceptual Captions with an effective batch size of 512 when run on 4 GPUs (128 samples per GPU).
```bash
python -u src/training/main.py \
--train-data="<dataset-dir>/conceptual_captions/Train-GCC-training_output.csv" \
--val-data="<dataset-dir>/conceptual_captions/Validation_GCC-1.1.0-Validation_output.csv" \
--path-data="<dataset-dir>/conceptual_captions" \
--imagenet-val="<dataset-dir>/imagenet/val" \
--warmup 20000 \
--batch-size=128 \
--lr=1e-3 \
--wd=0.1 \
--lr-scheduler="cosine-restarts" \
--restart-cycles=10 \
--epochs=70 \
--method="cloob" \
--init-inv-tau=30 \
--scale-hopfield=8 \
--workers=8 \
--model="RN50" \
--dist-url="tcp://127.0.0.1:6100" \
--batch-size-eval=512
```
We provide a Jupyter notebook to perform zero-shot evaluation with a trained model.
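For orientation, the following is a minimal sketch of the usual CLIP-style zero-shot protocol that such an evaluation follows; it is not the notebook itself. The `model.encode_image` / `model.encode_text` interface and the `tokenize` helper are assumed to follow the OpenCLIP conventions this codebase is based on, and the prompt template is illustrative.

```python
import torch

# Sketch of CLIP-style zero-shot classification. `model` and `tokenize` stand in
# for the trained CLOOB model and its tokenizer; the actual loading code lives in
# the notebook. `encode_image` / `encode_text` are assumed to follow the usual
# OpenCLIP-style signatures.
@torch.no_grad()
def zeroshot_classify(model, tokenize, images, class_names, device="cpu"):
    # Build one text prompt per class and embed it.
    prompts = tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    text_features = model.encode_text(prompts)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Embed the (already preprocessed) image batch.
    image_features = model.encode_image(images.to(device))
    image_features /= image_features.norm(dim=-1, keepdim=True)

    # Cosine similarity between every image and every class prompt;
    # the most similar prompt gives the predicted class index.
    logits = image_features @ text_features.t()
    return logits.argmax(dim=-1)
```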
MIT LICENSE