Official implementation of the intelligibility protocol PXP.
The dependencies are minimal; install them with:

```bash
pip install -r requirements.txt
```
You may want to create a virtual environment (or use conda) to avoid conflicts with your system packages. We use Python 3.9.18 for all experiments.
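For example, with the standard `venv` module:

```bash
python3.9 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```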
You will also have to create a `results` folder:

```bash
mkdir results
```
Finally, place your API keys in a `.env` file in the root directory; a template is provided in `.env.template`.
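Since model calls go through litellm, the keys are typically the providers' standard environment variables. The variable names below are illustrative assumptions; defer to `.env.template` for the exact variables this repo reads:

```
# illustrative .env contents -- see .env.template for the real variable names
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
```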
For the RAD task, please write to Prof. Sidong Liu [email].
For the DRUG task, please write to Shreyas V [email].
Please mention "[INTERACT]" in the subject line.
You can then use `src/preprocess.py` to generate the data in the correct format for the experiments. This will also summarize the data, using the `summarize` function from `src/utils.py`.
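A minimal invocation (we have not verified whether the script takes task-specific flags, so check its argument parser):

```bash
python src/preprocess.py
```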
To reproduce our RAD results, you can run the following command:

```bash
python src/interact.py --num_iter=5 --machine="claude-3-5-sonnet-20240620"
```
To reproduce our DRUG results, you can run the following command:

```bash
python src/interact.py --num_iter=5 --machine="claude-3-5-sonnet-20240620" --task=DRUG --human_type=static --eval_at_start
```
This will output the counts of one-way and two-way intelligible sessions, create a `tags.txt` file of the actual tags exchanged between the two agents, and save the D (`data.pkl`), M (`messages.pkl`), and C (`context.pkl`) (from Procedure 1 in the paper) to the `results/` folder.
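Judging by the `.pkl` extensions, these are standard Python pickles; a quick way to inspect them (the object types inside are whatever the protocol stores, which we do not assume here):

```python
import pickle
from pathlib import Path

results = Path("results")

# D, M, and C from Procedure 1 in the paper
with open(results / "data.pkl", "rb") as f:
    D = pickle.load(f)
with open(results / "messages.pkl", "rb") as f:
    M = pickle.load(f)
with open(results / "context.pkl", "rb") as f:
    C = pickle.load(f)

print(type(D), type(M), type(C))
```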
To reproduce the trend in Figure 3 of the paper, we ran the above command 5 times and manually extracted how many one-way intelligible sessions (up to an interaction limit) were generated per agent.
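For instance, with a small shell loop (a convenience sketch, not part of the repo; check whether successive runs overwrite the files in `results/` before relying on it):

```bash
for i in 1 2 3 4 5; do
    python src/interact.py --num_iter=5 --machine="claude-3-5-sonnet-20240620"
done
```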
Reproducing the DRUG-Human results requires an expert, so the outcome may be stochastic, but an experiment can be launched using:

```bash
python src/interact.py --num_iter=5 --machine="claude-3-sonnet-20240229" --task=DRUG --human_type=real-time
```
Please run `python src/interact.py --help` to see all the parameters that can be customized. We support several LLMs, and the program should in principle run any LLM supported by litellm.
In general, the code allows for interaction between either static or real-time human feedback and an LLM (interfaced by the `XMachine` class).
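For example, assuming `--machine` accepts any litellm model identifier (we have only verified the Claude model strings shown above):

```bash
python src/interact.py --num_iter=5 --machine="gpt-4o"
```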
To use the approach with custom data:

- you can use some form of static human feedback (like RAD), stored in the `data/` folder as a CSV;
- as with the DRUG task, you can create an analogous real-time feedback system, using the command line and a real human expert for feedback;
- DRUG can also be run in static mode by passing `--human_type=static`.
Here, we describe precisely how to use the code for a different task, say MATS (Materials Science); a skeleton sketch follows the list.

- Decide the type of feedback you have access to: static (a CSV with some predictions and explanations) or real-time (a human expert).
- If it is static, add the data to the `data/` folder.
- Depending on the type of feedback, implement a `MATSAgent` class in `src/agents.py`, which should inherit from `Agent` and borrow code from `RADAgent` (if static) or `DRUGAgent` (if real-time).
- Following this, implement `MATSMachine` and `MATSHuman` classes in the same file.
- Next, change `create_agent` in `src/agent.py` to also be compatible with the new task.
- Finally, implement the `MATS` class in `src/tasks.py`, which should inherit from `Task` and borrow code from `RAD` and `DRUG` appropriately.
- Add the task to the choices for the `--task` argument, then run the code using:

```bash
python src/interact.py --num_iter=5 --machine="claude-3-sonnet-20240229" --task=MATS
```
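As a rough sketch of the layout (a hypothetical skeleton only; the base-class method names and constructor signatures are not shown here, so copy them from `RADAgent`/`DRUGAgent` and `RAD`/`DRUG` in the actual source):

```python
# Hypothetical skeleton for a new MATS task -- mirror the real interfaces
# in src/agents.py and src/tasks.py rather than copying this verbatim.
from agents import Agent   # src/agents.py
from tasks import Task     # src/tasks.py


class MATSAgent(Agent):
    """Common MATS agent behaviour (borrow from RADAgent or DRUGAgent)."""


class MATSMachine(MATSAgent):
    """LLM side of the MATS interaction (the machine agent)."""


class MATSHuman(MATSAgent):
    """Human side: static CSV feedback or a real-time expert."""


class MATS(Task):
    """MATS task definition (borrow from RAD and DRUG as appropriate)."""
```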
This is an example interaction (from the RAD task) generated using the PXP protocol and our implementation (as explained in the paper, this is a special case of the protocol in which the human agent can never revise its internal model).
Please raise an issue if you have any questions or need help with the code.
```bibtex
@misc{srinivasan2024implementationapplicationintelligibilityprotocol,
  title={Implementation and Application of an Intelligibility Protocol for Interaction with an LLM},
  author={Ashwin Srinivasan and Karan Bania and Shreyas V and Harshvardhan Mestha and Sidong Liu},
  year={2024},
  eprint={2410.20600},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2410.20600},
}
```