Data and code for the paper "Generalized Category Discovery with Large Language Models in the Loop" (ACL 2024 Findings).
Generalized Category Discovery (GCD) is a crucial task that aims to recognize both known and novel categories in a set of unlabeled data by leveraging a small amount of labeled data that covers only known categories. Due to the lack of supervision and category information, current methods usually perform poorly on novel categories and struggle to reveal the semantic meaning of the discovered clusters, which limits their real-world applications. In this paper, we propose Loop, an end-to-end active-learning framework that introduces LLMs into the training loop, boosting model performance and generating category names without relying on any human effort.
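The snippet below is a minimal sketch of the two kinds of LLM queries such a loop relies on (choosing a likely same-category neighbor during training, and naming a discovered cluster), written against the openai==0.28.0 client listed in the requirements. The prompts, model choice, and helper names (`query_llm`, `choose_neighbor`, `name_cluster`) are illustrative assumptions, not the exact prompts or code used in the paper; see the code run via run.sh for the actual implementation.

```python
# Illustrative sketch only: prompts, model and helper names are assumptions,
# not the paper's exact implementation. Requires openai==0.28.0.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key


def query_llm(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a single-turn prompt to the chat API and return the text reply."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()


def choose_neighbor(query_text: str, candidates: list) -> str:
    """Ask the LLM which candidate most likely belongs to the same category as the query."""
    options = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    prompt = (
        "Select the sentence that expresses the same intent as the query.\n"
        f"Query: {query_text}\nCandidates:\n{options}\n"
        "Answer with the number of the best candidate."
    )
    return query_llm(prompt)


def name_cluster(representative_texts: list) -> str:
    """Ask the LLM to propose a short category name for a discovered cluster."""
    joined = "\n".join(f"- {t}" for t in representative_texts)
    prompt = (
        "The following utterances belong to the same category. "
        "Give a short, descriptive name for this category.\n" + joined
    )
    return query_llm(prompt)
```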
We perform experiments on three public datasets: clinc, banking, and stackoverflow, which are included in the `./data` folder of this repository.
An overview of our model is shown in the figure.
- python==3.8
- pytorch==1.11.0
- transformers==4.15.0
- openai==0.28.0
- scipy==1.9.3
- numpy==1.23.5
- scikit-learn==1.2.0
- faiss-gpu==1.7.2
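The pinned versions above can be installed, for example, with conda and pip. The commands below are one possible setup, not an official one: the environment name, conda channels, and CUDA toolkit version are assumptions and may need adjusting for your machine (pytorch and faiss-gpu are usually easiest to install via conda).

```bash
# One possible environment setup; adjust the CUDA toolkit version to your GPU/driver.
conda create -n loop python=3.8 -y
conda activate loop
conda install pytorch==1.11.0 cudatoolkit=11.3 -c pytorch -y
conda install faiss-gpu==1.7.2 -c pytorch -y
pip install transformers==4.15.0 openai==0.28.0 scipy==1.9.3 numpy==1.23.5 scikit-learn==1.2.0
```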
Pre-train, train, and test the model with the bash script:
sh run.sh
You can also add or change parameters in run.sh (more parameters are listed in init_parameter.py).
Note that the experimental results may differ slightly across runs because of the randomness of clustering at test time, even though we fixed the random seeds.

Some code references the following repositories:
If our paper or code is helpful to you, please consider citing it:
@article{an2023generalized,
title={Generalized Category Discovery with Large Language Models in the Loop},
author={An, Wenbin and Shi, Wenkai and Tian, Feng and Lin, Haonan and Wang, QianYing and Wu, Yaqiang and Cai, Mingxiang and Wang, Luyan and Chen, Yan and Zhu, Haiping and others},
journal={arXiv preprint arXiv:2312.10897},
year={2023}
}