This repository contains the official JAX implementation of FLIP, as described in the paper Scaling Language-Image Pre-training via Masking.
```
@inproceedings{li2022scaling,
  title={Scaling Language-Image Pre-training via Masking},
  author={Li, Yanghao and Fan, Haoqi and Hu, Ronghang and Feichtenhofer, Christoph and He, Kaiming},
  booktitle={CVPR},
  year={2023}
}
```
- The implementation is based on JAX and the models are trained on TPUs.
- FLIP models are trained on LAION datasets including LAION-400M and LAION-2B.
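As the paper's title suggests, FLIP accelerates CLIP-style pre-training by randomly masking out a large fraction of image patches and encoding only the visible ones. A minimal numpy sketch of this masking step (function and variable names here are illustrative, not taken from this codebase):

```python
import numpy as np

def random_mask_patches(patches, mask_ratio, rng):
    """Keep a random subset of image patches (FLIP/MAE-style masking).

    patches: (N, D) array of N flattened patches.
    Returns the visible patches and their indices.
    """
    n = patches.shape[0]
    n_keep = int(n * (1.0 - mask_ratio))
    keep = rng.permutation(n)[:n_keep]  # patches the image encoder will see
    return patches[keep], keep

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 768))  # e.g. a 14x14 grid of ViT patch tokens
kept, idx = random_mask_patches(patches, mask_ratio=0.5, rng=rng)
print(kept.shape)  # (98, 768): only half the patches are encoded
```

With 50% masking the image encoder processes half as many tokens per image, which is the source of the training speedup described in the paper.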
The following table provides zero-shot results on ImageNet-1K and links to pre-trained weights for the LAION datasets:
model | data | sampled | zero-shot IN-1K | weights |
---|---|---|---|---|
ViT-B/16 | LAION-400M | 12.8B | 68.0 | - |
ViT-L/16 | LAION-400M | 12.8B | 74.3 | - |
ViT-H/14 | LAION-400M | 12.8B | 75.5 | - |
ViT-L/16 | LAION-2B | 25.6B | 76.6 | download† |
ViT-H/14 | LAION-2B | 25.6B | 78.8 | download† |
† The released ViT-L/16 and ViT-H/14 models were trained on LAION data with faces blurred, as a legal requirement. This causes a slight performance drop of 0.2-0.3%: the released checkpoints reach 76.4% and 78.5%, respectively.
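For context, the zero-shot numbers above come from matching each image embedding against text embeddings of the ImageNet class names and picking the most similar class. A toy numpy sketch of that matching step, not the repository's actual evaluation code:

```python
import numpy as np

def zero_shot_classify(img_emb, txt_emb):
    """Predict a class per image by cosine similarity to class-text embeddings."""
    img = img_emb / np.linalg.norm(img_emb, axis=-1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=-1, keepdims=True)
    logits = img @ txt.T  # (num_images, num_classes) cosine similarities
    return logits.argmax(axis=-1)

# Toy embeddings: 3 classes, each image roughly aligned with its class text.
txt = np.eye(3)
img = np.array([[0.9, 0.1, 0.0],
                [0.0, 1.0, 0.2],
                [0.1, 0.0, 0.8]])
preds = zero_shot_classify(img, txt)
print(preds)  # [0 1 2]
```

In practice the class texts are built from prompt templates (e.g. "a photo of a {class}") and averaged, but the similarity-and-argmax step is the same.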
Please check INSTALL.md for installation instructions and data preparation.
Our FLIP models are trained on Google Cloud TPUs. To set up Google Cloud TPUs, please refer to their docs for single VM setup and pod slice setup.
By default, we train ViT-B/L models using v3-256 TPUs and ViT-H models with v3-512 TPUs.
```
export TFDS_DATA_DIR=gs://$GCS_TFDS_BUCKET/datasets
python3 main.py \
    --workdir=${workdir} \
    --config=$1 \
    --config.batch_size=256 \
    --config.laion_path=LAION_PATH
```
```
gcloud alpha compute tpus tpu-vm ssh $VM_NAME --zone $ZONE \
    --worker=all --command "
export TFDS_DATA_DIR=gs://$GCS_TFDS_BUCKET/datasets &&
python3 main.py --workdir=$WORKDIR --config=configs/cfg_flip_large.py --config.laion_path=LAION_PATH
"
```
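The launch commands above assume several shell variables have been set beforehand; for example (all values below are illustrative placeholders, not from this repository):

```shell
# Illustrative placeholders -- substitute your own names, zone, and buckets.
export VM_NAME=flip-tpu-vm                      # TPU VM / pod slice name
export ZONE=europe-west4-a                      # zone where the TPU was created
export GCS_TFDS_BUCKET=my-tfds-bucket           # GCS bucket holding TFDS datasets
export WORKDIR=gs://my-ckpt-bucket/flip_large   # where checkpoints and logs go
```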
For unmasked tuning, we use the same config except for the following parameters:
python3 main.py --workdir=$WORKDIR --config=configs/cfg_flip_large.py \
--config.laion_path=LAION_PATH \
--config.model.model_img.mask_ratio=0.0 --config.learning_rate=4e-8
--config.num_epochs=100 --config.warmup_epochs=20 \
--config.pretrain_dir=${PRETRAIN} \
To avoid out-of-memory issues, you can optionally turn on activation checkpointing with `config.model.model_img.transformer.remat_policy=actcp` and reduce the batch size via `config.batch_size`.
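For example, both memory-saving overrides can be appended to the training command (the batch size of 128 below is only an illustration; pick what fits in memory):

```shell
python3 main.py --workdir=$WORKDIR --config=configs/cfg_flip_large.py \
    --config.laion_path=LAION_PATH \
    --config.model.model_img.transformer.remat_policy=actcp \
    --config.batch_size=128
```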
To evaluate the pre-trained models on zero-shot ImageNet-1K:
```
export TFDS_DATA_DIR=gs://$GCS_TFDS_BUCKET/datasets
python3 main.py \
    --workdir=${workdir} \
    --config=configs/cfg_flip_large.py \
    --config.pretrain_dir=$PRETRAIN_MODEL_PATH \
    --config.eval_only=True
```
This project is under the CC-BY-NC 4.0 license. See LICENSE for details.