This is the implementation of Joint Chinese Word Segmentation and Part-of-speech Tagging via Multi-channel Attention of Character N-grams at COLING 2020.
If you have any questions, you can e-mail Yuanhe Tian at yhtian@uw.edu.
If you use or extend our work, please cite our paper at COLING 2020:
@inproceedings{tian-etal-2020-joint-chinese,
title = "Joint Chinese Word Segmentation and Part-of-speech Tagging via Multi-channel Attention of Character N-grams",
author = "Tian, Yuanhe and Song, Yan and Xia, Fei",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
pages = "2073--2084",
}
Our code works with the following environment:
- `python=3.7`
- `pytorch=1.3`

Run `pip install -r requirements.txt` to install the required packages.
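If you want to set up a fresh environment first, here is a minimal sketch (assuming conda is available; the environment name `mcasp` is only an example):

```bash
# Create and activate a Python 3.7 environment, then install the pinned requirements
conda create -n mcasp python=3.7
conda activate mcasp
pip install -r requirements.txt
```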
In our paper, we use BERT (paper) and ZEN (paper) as the encoder.
For BERT, please download pre-trained BERT-Base Chinese from Google or from HuggingFace. If you download it from Google, you need to convert the model from its TensorFlow format to the PyTorch format (see the sketch below).
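One way to do the conversion is the HuggingFace `transformers` conversion CLI; this is a hedged sketch that assumes `transformers` is installed and that the checkpoint is the `chinese_L-12_H-768_A-12` archive released by Google:

```bash
# Convert the TensorFlow BERT-Base Chinese checkpoint into a PyTorch weights file
transformers-cli convert --model_type bert \
  --tf_checkpoint chinese_L-12_H-768_A-12/bert_model.ckpt \
  --config chinese_L-12_H-768_A-12/bert_config.json \
  --pytorch_dump_output chinese_L-12_H-768_A-12/pytorch_model.bin
```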
For ZEN, you can download the pre-trained model from here.
For McASP, you can download the models we trained in our experiments from here (passcode: d3V9).
Run `run_sample.sh` to train a model on the small sample data under the `sample_data` directory.
We use CTB5, CTB6, CTB7, CTB9, and Universal Dependencies 2.4 (UD) in our paper.
To obtain and pre-process the data, go to the `data_preprocessing` directory and run `getdata.sh`. This script will download and process the official data from UD. For CTB5 (LDC05T01), CTB6 (LDC07T36), CTB7 (LDC10T07), and CTB9 (LDC2016T13), you need to obtain the official data yourself and then put the raw data folder under the `data_preprocessing` directory.
All processed data will appear in the `data` directory, organized by dataset; each dataset folder contains files with the same names as those under the `sample_data` directory.
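A hedged walk-through of the data preparation steps above (the raw LDC folder names below are illustrative; use whatever layout `getdata.sh` expects):

```bash
cd data_preprocessing
# Put the raw LDC releases here first, e.g. (folder names are illustrative):
#   ./LDC05T01/    # CTB5
#   ./LDC2016T13/  # CTB9
bash getdata.sh    # also downloads and processes the UD data
# Processed datasets then appear under ../data/, one sub-directory per dataset
```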
You can find the command lines to train and test models in `train.sh` and `test.sh`, respectively.
Here are some important parameters (a hedged example command assembling them follows the list):

- `--do_train`: train the model.
- `--do_test`: test the model.
- `--use_bert`: use BERT as the encoder.
- `--use_zen`: use ZEN as the encoder.
- `--bert_model`: the directory of the pre-trained BERT/ZEN model.
- `--use_attention`: use multi-channel attention.
- `--cat_type`: the categorization strategy to use (either `freq` or `length`).
- `--ngram_length`: the maximum length of n-grams to consider.
- `--cat_num`: the number of channels (categories) to use (this number needs to equal `ngram_length` if `cat_type` is `length`).
- `--ngram_type`: use `av`, `dlg`, or `pmi` to construct the lexicon N.
- `--av_threshold`: when using `av` to construct the lexicon N, n-grams whose AV score is lower than this threshold are excluded from the lexicon N.
- `--ngram_threshold`: n-grams whose frequency is lower than this threshold are excluded from the lexicon N. Note that when the threshold is set to 1, no n-gram is filtered out by its frequency; we therefore do NOT recommend using 1 as the n-gram frequency threshold.
- `--model_name`: the name under which to save the model.
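As a hedged illustration, the flags above could be combined as follows; the entry-point script name, the data-path arguments, and the hyper-parameter values are placeholders rather than verified settings, so please refer to `train.sh` for the exact commands used in our experiments:

```bash
# Illustrative only: the script name and data-path flags are assumptions; see train.sh
python main.py \
  --do_train \
  --train_data_path=./data/CTB5/train.tsv \
  --eval_data_path=./data/CTB5/dev.tsv \
  --use_bert \
  --bert_model=/path/to/bert-base-chinese \
  --use_attention \
  --cat_type=length \
  --ngram_length=5 \
  --cat_num=5 \
  --ngram_type=av \
  --av_threshold=2 \
  --ngram_threshold=2 \
  --model_name=ctb5_bert_mcasp
```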
- Regular maintenance.
If you want us to implement any additional functions, you can leave a comment in the Issues section.