Implementation of "Enriching Pre-trained Language Model with Entity Information for Relation Classification".
- python 3.6.9
- pytorch 1.5.1+cu92
- transformers 2.11.0
- tqdm 4.40.1
- TITAN Xp
- CUDA Version 9.0.176
- Download the pre-trained BERT model and put it into the `resource` folder.
- Run the following command to start the program.
```shell
python run.py \
    --batch_size=16 \
    --max_len=128 \
    --lr=2e-5 \
    --epoch=5 \
    --dropout=0.1
```
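The paper's central idea is to mark the two target entities in the input sentence with special tokens ('$' around the first entity, '#' around the second) before feeding it to BERT. A minimal sketch of that preprocessing step; `mark_entities` is a hypothetical helper for illustration, not part of this repository's API:

```python
# Sketch of the entity-marker preprocessing described in the paper:
# entity 1 is wrapped with '$' and entity 2 with '#' before tokenization.
# `mark_entities` is a hypothetical name, not this repository's API.

def mark_entities(tokens, e1_span, e2_span):
    """Insert '$' around the first entity span and '#' around the second.

    Spans are (start, end) token indices, end exclusive; spans are
    assumed not to overlap.
    """
    (s1, t1), (s2, t2) = e1_span, e2_span
    out = []
    for i, tok in enumerate(tokens):
        if i == s1:
            out.append("$")
        if i == s2:
            out.append("#")
        out.append(tok)
        if i == t1 - 1:
            out.append("$")
        if i == t2 - 1:
            out.append("#")
    return out

print(" ".join(mark_entities(
    ["the", "fire", "was", "caused", "by", "a", "cigarette"],
    (1, 2), (6, 7))))
# the $ fire $ was caused by a # cigarette #
```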
  More details can be seen with `python run.py -h`.
- You can use the official scorer to check the final predicted result (in the `eval` folder).
```shell
perl semeval2010_task8_scorer-v1.2.pl proposed_answer.txt predicted_result.txt >> result.txt
```
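The headline number reported by the official SemEval-2010 Task 8 scorer is the macro-averaged F1 over the relation classes, with `Other` excluded from the average. A minimal pure-Python sketch of that metric for cross-checking (the function name `macro_f1` is mine, not the scorer's):

```python
# Sketch of macro-averaged F1 with the 'Other' class excluded, the
# headline metric of the official SemEval-2010 Task 8 scorer.
# `macro_f1` is an illustrative helper, not part of the scorer.
from collections import Counter

def macro_f1(gold, pred, ignore="Other"):
    """Macro-averaged F1 over all labels in gold/pred except `ignore`."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    labels = {l for l in set(gold) | set(pred) if l != ignore}
    f1s = []
    for l in labels:
        prec = tp[l] / (tp[l] + fp[l]) if tp[l] + fp[l] else 0.0
        rec = tp[l] / (tp[l] + fn[l]) if tp[l] + fn[l] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0

gold = ["Cause-Effect(e1,e2)", "Other", "Component-Whole(e1,e2)"]
pred = ["Cause-Effect(e1,e2)", "Cause-Effect(e1,e2)", "Component-Whole(e1,e2)"]
print(round(macro_f1(gold, pred), 4))
```

Note that the scorer itself also accounts for partially correct predictions (right relation, wrong direction) in its detailed report; both input files are expected to contain one `id<TAB>relation` line per instance.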
The result of my version and the result reported in the paper are as follows:
| paper | my version |
|---|---|
| 0.8925 | 0.8906 |
The training log can be seen in `train.log`, and the official evaluation result is available in `result.txt`.
Note:
- Some settings may be different from those mentioned in the paper.
- No validation set is used during training.