
LangKit

LangKit graphic

LangKit is an open-source text metrics toolkit for monitoring language models. It offers an array of methods for extracting relevant signals from the input and/or output text, which are compatible with the open-source data logging library whylogs.

πŸ’‘ Want to experience LangKit? Go to this notebook!

Table of Contents πŸ“–

  • Motivation 🎯
  • Features πŸ› οΈ
  • Installation πŸ’»
  • Usage πŸš€
  • Modules πŸ“¦
  • Benchmarks
  • Frequently Asked Questions

Motivation 🎯

Productionizing language models, including LLMs, comes with a range of risks: the space of possible inputs is effectively unbounded, and so is the space of outputs they can elicit. The unstructured nature of text poses a challenge in the ML observability space - a challenge worth solving, since a lack of visibility into a model's behavior can have serious consequences.

Features πŸ› οΈ

The out-of-the-box metrics include:

  • Text Quality
    • readability scores
    • complexity and grade scores
  • Text Relevance
    • similarity scores between prompt and response
    • similarity scores against user-defined themes
  • Security and Privacy
    • patterns - counts of strings matching a user-defined regex pattern group
    • jailbreaks - similarity scores with respect to known jailbreak attempts
    • prompt injection - similarity scores with respect to known prompt injection attacks
    • hallucinations - consistency checks between responses
    • refusals - similarity scores with respect to known LLM refusal-of-service responses
  • Sentiment and Toxicity
    • sentiment analysis
    • toxicity analysis
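
Several of the metrics above (theme relevance, jailbreaks, prompt injection, refusals) are similarity scores. As a rough illustration of how such a score is computed - LangKit itself uses sentence embeddings from a real embedding model, and the vectors below are made-up toy values - here is a minimal cosine-similarity sketch:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors:
    # dot(a, b) / (|a| * |b|), in [-1, 1] for real-valued vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for sentence embeddings of a prompt and a known
# jailbreak attempt (illustrative values only).
prompt_vec = [0.1, 0.3, 0.5]
jailbreak_vec = [0.2, 0.1, 0.4]
print(round(cosine_similarity(prompt_vec, jailbreak_vec), 3))  # β†’ 0.922
```

A score close to 1.0 means the prompt is very similar to the reference text; thresholding such scores is how similarity-based flags are typically raised.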

Installation πŸ’»

To install LangKit from the Python Package Index (PyPI), run:

pip install langkit[all]

(In shells such as zsh, quote the argument so the brackets are not globbed: pip install "langkit[all]".)

Usage πŸš€

LangKit modules contain UDFs that automatically wire into the collection of UDFs on String features provided by whylogs by default. All you have to do is import the LangKit modules and instantiate a custom schema, as shown in the example below.

import whylogs as why
from langkit import llm_metrics

results = why.log({"prompt": "Hello!", "response": "World!"}, schema=llm_metrics.init())

The code above produces a set of metrics comprising the default whylogs metrics for text features plus all the metrics defined in the imported modules. The resulting profile can be visualized and monitored in the WhyLabs platform, or analyzed further by the user on their own.
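
The UDF wiring described above can be pictured as a small registry mapping metric names to functions over the input row. The sketch below is a toy illustration of that pattern, not whylogs' actual implementation - names such as register_udf and the two example metrics are invented for the example:

```python
# Toy UDF registry: metric name -> function of the input row.
UDF_REGISTRY = {}

def register_udf(name):
    # Decorator that records a function in the registry under `name`.
    def decorator(fn):
        UDF_REGISTRY[name] = fn
        return fn
    return decorator

@register_udf("response.length")
def response_length(row):
    return len(row["response"])

@register_udf("prompt.has_question")
def prompt_has_question(row):
    return "?" in row["prompt"]

def apply_udfs(row):
    # Emit the original columns plus one derived column per registered UDF.
    return {**row, **{name: fn(row) for name, fn in UDF_REGISTRY.items()}}

print(apply_udfs({"prompt": "Hello?", "response": "World!"}))
```

Importing a LangKit module plays the role of the @register_udf lines here: it registers the module's metrics so that a subsequent why.log call computes them automatically.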

More examples are available here.

Modules πŸ“¦

You can find more information about the different modules and their metrics here.

Benchmarks

| AWS Instance Type | Metric Module | Throughput      |
|-------------------|---------------|-----------------|
| c5.xlarge         | Light metrics | 2335 chats/sec  |
| c5.xlarge         | LLM metrics   | 8.2 chats/sec   |
| c5.xlarge         | All metrics   | 0.28 chats/sec  |
| g4dn.xlarge       | Light metrics | 2492 chats/sec  |
| g4dn.xlarge       | LLM metrics   | 23.3 chats/sec  |
| g4dn.xlarge       | All metrics   | 1.79 chats/sec  |

Frequently Asked Questions

You can find answers to common questions in our FAQ section.