jennhu/metalinguistic-prompting: Materials for "Prompting is not a substitute for probability measurements in large language models" (EMNLP 2023) #684
Labels
- Algorithms: Sorting, learning, or classifying. All algorithms go here.
- Code-Interpreter: OpenAI Code-Interpreter
- dataset: Public datasets and embeddings
- llm-evaluation: Evaluating Large Language Models' performance and behavior through human-written evaluation sets
- New-Label: Choose this option if the existing labels are insufficient to describe the content accurately
- Papers: Research papers
- Research: Personal research notes for a topic
Title
jennhu/metalinguistic-prompting: Materials for "Prompting is not a substitute for probability measurements in large language models" (EMNLP 2023)
Description
"Prompting is not a substitute for probability measurements in large language models
This repository contains materials for the EMNLP 2023 paper "Prompting is not a substitute for probability measurements in large language models" (Hu & Levy, 2023). The preprint is available on arXiv.
If you find the code or data useful in your research, please use the following citation:
```bibtex
@inproceedings{hu_prompting_2023,
  title     = {Prompting is not a substitute for probability measurements in large language models},
  author    = {Hu, Jennifer and Levy, Roger},
  year      = {2023},
  booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
  url       = {https://arxiv.org/abs/2305.13264}
}
```
Evaluation materials
Evaluation datasets can be found in the datasets folder. Please refer to the README in that folder for more details on how the stimuli were assembled and formatted.
Evaluation scripts
The scripts folder contains scripts for running the experiments. There are separate scripts for models accessed through Hugging Face (*hf.sh) and through the OpenAI API (*openai.sh).
For example, to evaluate flan-t5-small on the SyntaxGym dataset of Experiment 3b, run the following command from the root of this directory:
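A minimal sketch of the invocation, assuming the Hugging Face script follows the *hf.sh naming pattern described above (the exact script filename and argument format are assumptions, not the repository's documented command; check the scripts folder for the real names):

```bash
# Hypothetical invocation: evaluate flan-t5-small on the SyntaxGym dataset
# of Experiment 3b via a Hugging Face evaluation script. The script name
# and argument are assumed from the *hf.sh naming convention.
bash scripts/exp3b_hf.sh flan-t5-small
```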
Please note that to run the OpenAI models, you will need to save your OpenAI API key to a file named key.txt in the root of this directory. For security reasons, do not commit this file (it is ignored in .gitignore).
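On a Unix-like system, the key file can be created as follows (the sk-... value is a placeholder for your own key):

```bash
# Write your OpenAI API key to key.txt in the repository root.
# key.txt is listed in .gitignore, so it will not be committed.
echo "sk-..." > key.txt
```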
Results and analyses
The results from the paper can be accessed by extracting the results.zip file. This will create a folder called results, organized by experiment.
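For example, from the repository root:

```bash
# Extract the archived results into a results/ folder.
unzip results.zip
```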
A few notes about the results:
- The results from the direct evaluation method are identical across Experiments 3a and 3b (see the paper for details).
- The figures from our paper can be reproduced using the analysis.ipynb notebook (see the example after this list).
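One way to open the notebook, assuming a standard Jupyter installation:

```bash
# Launch the analysis notebook that reproduces the paper's figures.
jupyter notebook analysis.ipynb
```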
URL
https://github.com/jennhu/metalinguistic-prompting
Suggested labels
{'label-name': 'EMNLP-2023', 'label-description': "Materials and information related to the EMNLP 2023 paper 'Prompting is not a substitute for probability measurements in large language models'", 'confidence': 56.95}