
A Large-scale Attribute Dataset for Zero-shot Learning

We propose the Large-scale Attribute Dataset (LAD), which contains 78,017 images spanning 5 super-classes and 230 classes. The number of images in LAD is larger than the sum of the four most popular attribute datasets (AwA, CUB, aP/aY and SUN). 359 attributes covering visual, semantic and subjective properties are defined and annotated at the instance level.
We organized an international Zero-shot Learning Competition under AI Challenger using this dataset. More than 110 teams participated in the competition.
For fair comparison, we provide standard splits of classes for ZSL and splits of images for traditional supervised learning (packaged in the data).

Links to the paper, data, competition and baseline:

- paper download
- data download
  - from Google Drive
  - from BaiduYun (password: cwju)
- competition link
- baseline method

Citation

@inproceedings{zhao2019large,
  title={A Large-scale Attribute Dataset for Zero-shot Learning},
  author={Zhao, Bo and Fu, Yanwei and Liang, Rui and Wu, Jiahong and Wang, Yonggang and Wang, Yizhou},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  year={2019}
}

Experiment Details in the Paper
We run separate experiments on each super-class. For each super-class, the experiment is repeated 5 times with different splits of seen/unseen classes. The 5 splits are provided in the file "split_zsl.txt" (in the data package). For a super-class X, e.g. F (Fruits), use all Label_F_xx classes listed in each Unseen_List, e.g. Unseen_List_1, as the testing unseen classes; the remaining classes in super-class X are the training seen classes. The 5 Unseen_Lists correspond to the 5 experiments, and the 5 results are averaged to obtain the performance on super-class X. A minimal sketch of this protocol is given below.
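A minimal sketch of building the seen/unseen split for one super-class and one of the 5 experiments. It assumes "split_zsl.txt" lists the unseen-class labels of each Unseen_List on its own line (e.g. "Unseen_List_1: Label_F_01, Label_F_02, ..."); the exact file layout, and the `all_labels` and `evaluate` names in the usage comment, are placeholders, not part of the released data package.

```python
def load_unseen_lists(path="split_zsl.txt"):
    """Parse split_zsl.txt into {list_name: set_of_labels} (file format assumed)."""
    unseen = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            name, labels = line.split(":", 1)
            unseen[name.strip()] = {lab.strip() for lab in labels.replace(",", " ").split()}
    return unseen


def split_for_superclass(all_labels, unseen_lists, superclass="F", experiment=1):
    """Return (seen, unseen) class labels for one super-class and one of the 5 experiments."""
    prefix = f"Label_{superclass}_"
    classes = {lab for lab in all_labels if lab.startswith(prefix)}
    unseen = classes & unseen_lists[f"Unseen_List_{experiment}"]
    seen = classes - unseen
    return sorted(seen), sorted(unseen)


# Usage (hypothetical): run the 5 experiments on super-class F (Fruits) and average.
# lists = load_unseen_lists("split_zsl.txt")
# accs = [evaluate(*split_for_superclass(all_labels, lists, "F", k)) for k in range(1, 6)]
# print(sum(accs) / len(accs))
```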

Contact: Bo Zhao (bozhaonanjing at Gmail)