Quickstart | Installation | Documentation | Code of Conduct
Plexiglass is a toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).
It is a simple command line interface (CLI) tool that lets users quickly test LLMs against adversarial attacks such as prompt injection and jailbreaking.
Plexiglass also supports security, bias, and toxicity benchmarking of multiple LLMs by pulling in the latest adversarial prompts from sources such as jailbreakchat.com and the wiki_toxic dataset. See the Modes section below for more.
Please follow this quickstart guide in the documentation.
The first experimental release is version 0.0.1.
To install the package from PyPI:
pip install --upgrade plexiglass
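To verify the install, you can ask pip for the package metadata and, assuming the top-level module is importable under the same name as the package, try importing it:

pip show plexiglass
python -c "import plexiglass"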
Plexiglass has two modes: llm-chat and llm-scan.

llm-chat allows you to converse with the LLM and measure predefined metrics, such as toxicity, from its responses. It currently supports the following metrics:
- toxicity
- pii_detection

llm-scan runs benchmarks using open-source datasets to identify and assess various vulnerabilities in the LLM.
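As a rough sketch of how the two modes might be selected from the command line (the flag names below are assumptions for illustration, not the documented interface; run plexiglass --help or follow the quickstart guide for the real options):

# hypothetical invocation: chat with a model and score its responses for toxicity and PII
plexiglass --mode llm-chat --metrics toxicity,pii_detection
# hypothetical invocation: run the vulnerability benchmarks
plexiglass --mode llm-scan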
To request new features, please submit an issue. Planned features include:
- implement adversarial prompt templates in llm-chat mode
- security, bias, and toxicity benchmarking with llm-scan mode
- generate HTML reports in llm-scan and llm-chat modes
- standalone Python module
- production-ready API
Join us in #plexiglass on Discord.
Read our Code of Conduct.
Contributor graph made with contrib.rocks.