Upload, score, and visually compare multiple LLM-graded summaries simultaneously!
✍️📜🧑🏫📖💯
The popularity of large language models (LLMs) has inspired developers to incorporate them into adaptive educational tools that automatically score a summary written about a larger body of text, in educational settings ranging from classrooms to digital textbooks. Interactively exploring how LLMs score different summaries can help developers understand the decisions on which the LLMs base their scores, discover unintended biases, update the LLMs to address those biases, and mitigate the potential pedagogical ramifications of prematurely deploying untested LLM-powered educational technologies.
iScore is an interactive visual analytics tool for developers to upload, score, and compare multiple summaries of a source text simultaneously. iScore introduces a new workflow for comparing the language features that contribute to different LLM scores:
- First, users upload, score, and then manually revise and re-score multiple source/summary pairs simultaneously.
- Then, users can visually track how scores change across revisions in the context of expert-scored LLM training data.
- Finally, users can compare model attention weights between words across model layers, as well as differences in scores between automatically perturbed revisions of a summary.
Together, the views provide LLM developers with access to multiple summary comparison visualizations and several well-known LLM interpretability methods including attention attribution, input perturbation, and adversarial examples. Combining these visualizations and methods in a single visual interface broadly enables deeper analysis of LLM behavior that was previously time-consuming and difficult to perform.
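To make the input-perturbation method concrete, here is a minimal sketch of the idea, not iScore's actual pipeline; the model name is a hypothetical placeholder for whatever Hugging Face summary-scoring model you serve:

```python
# A minimal sketch of the input-perturbation idea; NOT iScore's pipeline.
# The model name below is a placeholder: substitute your own Hugging Face
# summary-scoring model (any sequence-classification checkpoint works).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/your-summary-scoring-model"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def score(source: str, summary: str) -> float:
    """Score a source/summary pair using the model's first output logit."""
    inputs = tokenizer(source, summary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

source = (
    "Regular exercise increases blood flow to the brain, "
    "which studies link to improved memory and learning."
)
summary = "Exercise improves memory by increasing blood flow to the brain."
base = score(source, summary)

# Perturb the summary one word at a time and watch how the score moves;
# words whose removal shifts the score most contributed most to it.
words = summary.split()
for i, word in enumerate(words):
    perturbed = " ".join(words[:i] + words[i + 1:])
    print(f"{word:>12}: {score(source, perturbed) - base:+.4f}")
```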
This code accompanies the research paper:
iScore: Visual Analytics for Interpreting How Language Models Automatically Score Summaries
Adam Coscia, Langdon Holmes, Wesley Morris, Joon Suh Choi, Scott Crossley, Alex Endert
ACM Conference on Intelligent User Interfaces (IUI), 2024
| 📖 [Paper](https://doi.org/10.1145/3640543.3645142) |
🎞️ Watch the demo video for a full tutorial here: https://youtu.be/EYJX-_fQPf0
🚀 For a live demo, visit: https://adamcoscia.com/papers/iscore/demo/
🌱 You can test our visualizations on your own LLMs in just a few easy steps!
- Install Python v3.9.x (latest release)
- Clone this repo to your computer (instructions):

  ```
  git clone git@github.com:AdamCoscia/iScore.git

  # use --depth if you don't want to download the whole commit history
  git clone --depth 1 git@github.com:AdamCoscia/iScore.git
  ```
- A frontend vanilla HTML/CSS/JavaScript web app with D3.js and Tabulator!
- Additional details can be found in interface/README.md
Navigate to the interface folder:

```
cd interface
```

Then start a local web server:

- If you are running Windows:

  ```
  py -3.9 -m http.server
  ```

- If you are running MacOS / Linux:

  ```
  python3.9 -m http.server
  ```

Navigate to http://localhost:8000. You should see iScore running in your browser :)
- A backend Python 3.9 Flask web app to run local LLM models downloaded from Hugging Face!
- Additional details can be found in server/README.md
Navigate to the server folder:

```
cd server
```

Create a virtual environment:

- If you are running Windows:

  ```
  # Create a virtual environment
  py -3.9 -m venv venv

  # Activate the virtual environment
  .\venv\Scripts\activate
  ```

- If you are running MacOS / Linux:

  ```
  # Create a virtual environment
  python3.9 -m venv venv

  # Activate the virtual environment
  source venv/bin/activate
  ```

Install dependencies:

```
python -m pip install -r requirements.txt
```
Install symspellpy v6.7.7 (instructions).

symspellpy is a Python port of SymSpell v6.7.1.

Warning for macOS users: symspellpy has only been tested on Windows and Linux systems and is assumed to work on macOS!
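To verify the symspellpy install, here is a quick sanity check (not part of iScore's setup) using the English frequency dictionary bundled with the package:

```python
# Sanity-check the symspellpy install (not part of iScore's setup).
import pkg_resources
from symspellpy import SymSpell, Verbosity

sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
dictionary_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_dictionary_en_82_765.txt"
)
sym_spell.load_dictionary(dictionary_path, term_index=0, count_index=1)

# Should suggest "members" for the misspelling "memebers".
for suggestion in sym_spell.lookup("memebers", Verbosity.CLOSEST, max_edit_distance=2):
    print(suggestion.term, suggestion.distance, suggestion.count)
```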
Install PyTorch v2.0.x (instructions).
PyTorch is installed separately because some systems may support CUDA, which requires a different installation process and can significantly speed up the tool.
- First, check if your GPU can support CUDA (link)
- Then, follow the instructions linked above to determine if your system can support CUDA for computation.
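Once PyTorch is installed, you can quickly check whether it can see a CUDA-capable GPU:

```python
# Check whether this PyTorch build can see a CUDA-capable GPU.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```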
Then run the server:

```
python main.py
```
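As an optional smoke test, you can poke the server from Python. This sketch assumes Flask's default port 5000; check server/README.md or main.py for the actual host and port:

```python
# Optional smoke test. Assumes Flask's default http://localhost:5000;
# check server/README.md or main.py for the actual host/port.
import urllib.error
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:5000/") as response:
        print("Server responded with status", response.status)
except urllib.error.HTTPError as err:
    # Any HTTP response (even a 404) means the server is running.
    print("Server is up; the root route returned", err.code)
except urllib.error.URLError:
    print("Could not reach the server. Is it running?")
```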
Led by Adam Coscia, iScore is the result of a collaboration between visualization experts in human-centered computing and interaction design and learning engineers with expertise in natural language processing (NLP) and developing learning tools, spanning Georgia Tech, Vanderbilt, and Georgia State.
iScore is created by Adam Coscia, Langdon Holmes, Wesley Morris, Joon Suh Choi, Scott Crossley, and Alex Endert.
To learn more about iScore, please read our research paper (published at IUI '24).
```bibtex
@inproceedings{Coscia:2024:iScore,
  author = {Coscia, Adam and Holmes, Langdon and Morris, Wesley and Choi, Joon S. and Crossley, Scott and Endert, Alex},
  title = {iScore: Visual Analytics for Interpreting How Language Models Automatically Score Summaries},
  year = {2024},
  isbn = {979-8-4007-0508-3},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3640543.3645142},
  doi = {10.1145/3640543.3645142},
  booktitle = {Proceedings of the 29th International Conference on Intelligent User Interfaces},
  location = {Greenville, SC, USA},
  series = {IUI '24}
}
```
The software is available under the MIT License.
If you have any questions, feel free to open an issue or contact Adam Coscia.