This is the implementation of our IJCAI-2021 paper "Learning Class-Transductive Intent Representations for Zero-shot Intent Detection".
The appendix mentioned in the paper is provided in Appendix.pdf.
This repository contains code modified from here for CapsNet+CTIR, here for ZSDNN+CTIR, and here for +LOF+CTIR. Many thanks to the original authors!
cd data/nlu_data
You can download the GloVe embedding file we used from here.
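Once downloaded, the GloVe file can be loaded into a word-to-vector map. A minimal sketch, assuming the standard `glove.*.txt` format (one token followed by space-separated floats per line); the function name and file path below are illustrative, not the repository's actual API:

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a text file into a {word: np.ndarray} dict."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            # First field is the token, the rest are the vector components.
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

# Usage (path is illustrative):
# vectors = load_glove("data/nlu_data/glove.840B.300d.txt")
```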
cd capsnet-CTIR
python main.py SNIP ZSID
python main.py SNIP GZSID
python main.py CLINC ZSID
python main.py CLINC GZSID
cd zerodnn-CTIR
python zerodnn_main.py SNIP ZSID
python zerodnn_main.py SNIP GZSID
python zerodnn_main.py CLINC ZSID
python zerodnn_main.py CLINC GZSID
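At a high level, zero-shot intent detection methods such as ZSDNN compare an utterance representation against intent-label representations and pick the closest class, which lets them score intents never seen in training. A toy sketch of that matching step (the function, vectors, and intent names are illustrative placeholders, not the repository's code):

```python
import numpy as np

def zero_shot_classify(utterance_vec, intent_vecs):
    """Return the intent whose label embedding is most cosine-similar to
    the utterance embedding; intent_vecs maps intent name -> vector."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(intent_vecs, key=lambda name: cos(utterance_vec, intent_vecs[name]))

# Toy example with 3-d embeddings (illustrative values):
intents = {
    "play_music": np.array([1.0, 0.0, 0.0]),
    "get_weather": np.array([0.0, 1.0, 0.0]),
}
print(zero_shot_classify(np.array([0.9, 0.1, 0.0]), intents))  # prints play_music
```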
The main idea of the two-stage method for GZSID is to first determine whether an utterance belongs to an unseen intent (i.e., whether it falls outside Y_seen), and then classify it into a specific intent class. This method bypasses the need to classify an input sentence among all the seen and unseen intents, thereby alleviating the domain shift problem. To verify the performance of integrating CTIR into the two-stage method, we design a new two-stage pipeline (+LOF+CTIR). In Phase 1, a test utterance is classified into one of the classes from Y_seen ∪ {y_unseen} using the density-based algorithm LOF (LMCL) (refer here). In Phase 2, we perform ZSID for the utterances that have been classified into y_unseen, using CTIR methods such as CapsNet+CTIR and ZSDNN+CTIR.
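The two phases can be sketched as follows, with scikit-learn's `LocalOutlierFactor` in novelty mode standing in for LOF (LMCL); the random embeddings and the two classifiers are placeholders for the repository's actual models:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Placeholder embeddings of seen-class training utterances.
X_seen = rng.normal(0.0, 1.0, size=(200, 4))

# Phase 1: novelty-mode LOF decides seen (+1) vs. unseen (-1).
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(X_seen)

def two_stage_predict(x, seen_classifier, zsid_classifier):
    """Route x to the supervised seen-intent classifier, or, if LOF flags
    it as an outlier (y_unseen), to a zero-shot classifier (e.g. a
    CTIR-based model like CapsNet+CTIR)."""
    if lof.predict(x.reshape(1, -1))[0] == 1:
        return seen_classifier(x)   # Phase 2a: ordinary supervised classification
    return zsid_classifier(x)       # Phase 2b: ZSID over unseen intents

# Illustrative usage with stub classifiers:
inlier = rng.normal(0.0, 1.0, size=4)
outlier = np.full(4, 10.0)
print(two_stage_predict(inlier, lambda x: "seen_intent", lambda x: "unseen_intent"))
print(two_stage_predict(outlier, lambda x: "seen_intent", lambda x: "unseen_intent"))
```

The design point the sketch captures is that Phase 1 never needs to know the unseen intent labels; only the utterances it rejects as outliers are passed to the zero-shot model.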
If you find this code useful, please cite the following paper:
@article{si2020learning,
title={Learning Disentangled Intent Representations for Zero-shot Intent Detection},
author={Si, Qingyi and Liu, Yuanxin and Fu, Peng and Li, Jiangnan and Lin, Zheng and Wang, Weiping},
journal={arXiv preprint arXiv:2012.01721},
year={2020}
}