[2024-10] 🚀 Check out our follow-up work in Findings of EMNLP 2024 (paper, repo). You can also interact with the online agent here.
- The WikiWebQuestions dataset can be found under the `WikiWebQuestions` directory (a quick loading sketch follows this list);
- Training data for all models published in our paper can be found under the `training_data` directory;
- Prediction results for all models published in our paper can be found under the `predicted_results` directory.
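As a quick sanity check, here is a minimal sketch for inspecting the dev split. It assumes the file is a JSON array of example objects; the exact field names are not documented here, so print one example to see the schema:

```python
# Minimal sketch: peek at the WikiWebQuestions dev split.
# Assumes the file is a JSON array of example objects; the field
# names are an assumption, so inspect one example directly.
import json

with open("WikiWebQuestions/dev.json") as f:
    dev = json.load(f)

print(f"{len(dev)} dev examples")
print(dev[0])  # inspect the schema of a single example
```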
Two models from the paper are available on Hugging Face:

- https://huggingface.co/stanford-oval/llama-7b-wikiwebquestions: trained on WikiWebQuestions and the Stanford Alpaca dataset. In the paper, this is the `WikiSP (ours)` model in the Section 6 tables.
- https://huggingface.co/stanford-oval/llama-7b-wikiwebquestions-qald7: trained on WikiWebQuestions, QALD-7, and the Stanford Alpaca dataset. In the paper, this is the model in Section 7.
To download these models, you can use:

```bash
python -c 'from huggingface_hub import snapshot_download; snapshot_download(repo_id="stanford-oval/llama-7b-wikiwebquestions-qald7", repo_type="model", local_dir="<PATH_TO_LOCAL_DIRECTORY>", local_dir_use_symlinks=False)'
```
Then, start the server in a separate terminal using Hugging Face's text-generation-inference library. We recommend using their provided Docker image given its ease of use. Run:

```bash
docker run --gpus all --shm-size 1g -p 8700:80 -v <PATH_TO_LOCAL_DIRECTORY>:/data ghcr.io/huggingface/text-generation-inference:1.3.4 --model-id /data/ --num-shard <number-of-gpus> --max-batch-total-tokens 4096
```

With the port mapping above, this exposes an inference endpoint at `http://127.0.0.1:8700/`.
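Once the server is up, you can sanity-check it via text-generation-inference's standard `/generate` route. A minimal sketch; the prompt below is only a placeholder, since the model was fine-tuned with an instruction-style template, so match the prompt format used in this repo:

```python
# Minimal sketch: query the TGI server started above.
# Uses text-generation-inference's /generate route. The raw prompt
# here is an assumption -- substitute the repo's actual prompt template.
import requests

response = requests.post(
    "http://127.0.0.1:8700/generate",
    json={
        "inputs": "what is the capital of France?",
        "parameters": {"max_new_tokens": 256},
    },
    timeout=60,
)
print(response.json()["generated_text"])
```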
The JSON file names correspond to the models in our paper as follows:

- `best` refers to the best model in our paper, i.e., `WikiSP (ours)` in the Section 6 tables;
- `no_mention_oracle` refers to the model named `No mentions, trained with Oracle NED` in Section 6.2 (Table 2);
- `no_mention_refined` refers to the model named `No mentions, trained with ReFinED` in Section 6.2 (Table 2);
- `original_query_format` refers to the model named `Original SPARQL` in Section 6.3 (Table 3).
To run evaluation on the dev set, prepare your prediction file in the same format as `predicted_results/best.json` and supply it as the first parameter of `execute_predictions("predicted_results/best.json", "WikiWebQuestions/dev.json")` in `eval_predictions.py`. Then, run `python eval_predictions.py`. If running on the test set, also change the second parameter to `"WikiWebQuestions/test.json"`, as in the sketch below.
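For reference, the relevant call inside `eval_predictions.py` would look like the following. This is a sketch of the edit described above, not standalone code: `execute_predictions` is the repository's own function, and `my_predictions.json` is a placeholder name for your prediction file:

```python
# Inside eval_predictions.py: swap the first argument for your own
# prediction file, and the second for the split you want to score.
execute_predictions("predicted_results/best.json", "WikiWebQuestions/dev.json")  # dev set
# execute_predictions("my_predictions.json", "WikiWebQuestions/test.json")      # test set
```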
`evaluate_dev` under `eval.py` specifies how to evaluate our model on the dev set. To run inference, first inject `wikidata-emnlp23/WikiWebQuestions/dev.json` into a MongoDB instance (in `eval.py`, this corresponds to `webquestion_dev = client["wikidata-eval"]["dev"]`).
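A minimal sketch of that injection using `pymongo`, assuming a local MongoDB and that `dev.json` is a JSON array of examples:

```python
# Minimal sketch: load the dev set into the collection eval.py reads,
# i.e. client["wikidata-eval"]["dev"]. Assumes a local mongod and that
# dev.json is a JSON array of example objects.
import json
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
with open("wikidata-emnlp23/WikiWebQuestions/dev.json") as f:
    examples = json.load(f)

client["wikidata-eval"]["dev"].insert_many(examples)
```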
Then, download the fine-tuned ReFinED model:

```bash
pip install https://github.com/amazon-science/ReFinED/archive/refs/tags/V1.zip
mkdir -p <your_directory>
curl https://almond-static.stanford.edu/research/qald/refined-finetune/config.json -o <your_directory>/config.json
curl https://almond-static.stanford.edu/research/qald/refined-finetune/model.pt -o <your_directory>/model.pt
curl https://almond-static.stanford.edu/research/qald/refined-finetune/precomputed_entity_descriptions_emb_wikidata_33831487-300.np -o <your_directory>/precomputed_entity_descriptions_emb_wikidata_33831487-300.np
```
Then run `do_ned_for_dev` under `eval.py` to run entity linking over all entities in the dev set (change `/data0/wikidata-workdir/models/refined` there to `<your_directory>`).
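For orientation, here is a hedged sketch of what that NED step might look like with the ReFinED V1 API. It assumes `Refined.from_pretrained` accepts a local model directory; `do_ned_for_dev` in `eval.py` is the authoritative version:

```python
# Hedged sketch: run the fine-tuned ReFinED model on one question.
# Assumes ReFinED V1's from_pretrained can load a local directory
# containing config.json / model.pt; defer to do_ned_for_dev in eval.py.
from refined.inference.processor import Refined

refined = Refined.from_pretrained(
    model_name="<your_directory>",       # directory downloaded above
    entity_set="wikidata",
    use_precomputed_descriptions=True,   # uses the downloaded .np file
)

spans = refined.process_text("what character did natalie portman play in star wars?")
for span in spans:
    entity_id = span.predicted_entity.wikidata_entity_id if span.predicted_entity else None
    print(span.text, entity_id)
```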
Finally, run `evaluate_dev` to get results on the dev set.
If you have used data or code from this repository, please cite this paper:
```bibtex
@inproceedings{xu-etal-2023-fine,
    title = "Fine-tuned {LLM}s Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over {W}ikidata",
    author = "Xu, Silei and
      Liu, Shicheng and
      Culhane, Theo and
      Pertseva, Elizaveta and
      Wu, Meng-Hsi and
      Semnani, Sina and
      Lam, Monica",
    editor = "Bouamor, Houda and
      Pino, Juan and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.353",
    pages = "5778--5791",
    abstract = "While large language models (LLMs) can answer many questions correctly, they can also hallucinate and give wrong answers. Wikidata, with its over 12 billion facts, can be used to ground LLMs to improve their factuality. This paper presents WikiWebQuestions, a high-quality question answering benchmark for Wikidata. Ported over from WebQuestions for Freebase, it consists of real-world data with SPARQL annotation. This paper presents a few-shot sequence-to-sequence semantic parser for Wikidata. We modify SPARQL to use the unique domain and property names instead of their IDs. We train the parser to use either the results from an entity linker or mentions in the query. We fine-tune LLaMA by adding the few-shot training data to that used to fine-tune Alpaca. Our experimental results demonstrate the effectiveness of this methodology, establishing a strong baseline of 76{\%} and 65{\%} answer accuracy in the dev and test sets of WikiWebQuestions, respectively. By pairing our semantic parser with GPT-3, we combine verifiable results with qualified GPT-3 guesses to provide useful answers to 96{\%} of the questions in dev. We also show that our method outperforms the state-of-the-art for the QALD-7 Wikidata dataset by 3.6{\%} in F1 score.",
}
```