This repository contains the data and code for our paper "Retrieval-augmented Multilingual Knowledge Editing".
The MzsRE dataset is located in ./data/MzsRE/.
Models are located in ./model/. You can download them from Google Drive.
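To take a quick look at the data, the following is a minimal sketch; it assumes the files under ./data/MzsRE/ are JSON lists of edit examples (the exact filenames and field names depend on the language pair, so adjust as needed):

```python
import json
from pathlib import Path

# Iterate over whatever JSON files are present; filenames here are not assumed.
for path in sorted(Path("./data/MzsRE").glob("*.json")):
    with open(path, "r", encoding="utf-8") as f:
        # Assumption: each file is a JSON list of edit examples (dicts).
        examples = json.load(f)
    print(f"{path.name}: {len(examples)} examples, fields: {sorted(examples[0].keys())}")
```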
```bash
python run_bizsre.py --editing_method=IKE --hparams_dir=./hparams/IKE/llama2-7b-16.yaml --data_dir=./data --metrics_save_dir ./results/llama2-7b/16shot/ --backbone llama2_7b-16shot_classifier --search classifier
python evaluate.py
```
- Our code is based on Bi-ZsRE, and we thank the authors for their outstanding open-source contribution.
- Our data is based on the vanilla ZsRE dataset (Levy et al., 2017), the Bi-ZsRE dataset (Wang et al., 2023), and the portability QA pairs collected by Yao et al. (2023).
- Zero-Shot Relation Extraction via Reading Comprehension (CoNLL 2017)
- Editing Large Language Models: Problems, Methods, and Opportunities (arXiv preprint 2023)
- Cross-Lingual Knowledge Editing in Large Language Models (arXiv preprint 2023)
If you find this work useful or use the data in your work, please consider citing our paper:
```bibtex
@article{wang2023retrievalaugmented,
  title={Retrieval-augmented Multilingual Knowledge Editing},
  author={Weixuan Wang and Barry Haddow and Alexandra Birch},
  journal={arXiv preprint arXiv:2312.13040},
  year={2023}
}
```