---
description: List of Metrics, Descriptors and Metric Presets available in Evidently.
---
How to use this page
This is a reference page. It shows all the available Metrics, Descriptors and Presets.
You can use the menu on the right to navigate the sections. We organize the Metrics by logical groups. Note that these groups do not match the Presets with a similar name: for example, there are more Data Quality Metrics than are included in the DataQualityPreset. For each Metric and Preset, this page lists:
- Name: the name of the Metric.
- Description: plain text explanation. For Metrics, we also specify whether it applies to the whole dataset or individual columns.
- Parameters: required and optional parameters for the Metric or Preset. We also specify the defaults that apply if you do not pass a custom parameter.
Metric visualizations. Each Metric includes a default render. To see the visualization, navigate to the example notebooks and run the notebook with all Metrics or Metric Presets.
{% hint style="info" %} We do our best to keep this page up to date. In case of discrepancies, check the "All metrics" notebook in examples. If you notice an error, please send us a pull request with an update! {% endhint %}
Defaults: Presets use the default parameters for each Metric. You can see them in the tables below.
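To apply any Preset, add it to a Report and run it on your data. Below is a minimal sketch, assuming pandas DataFrames with illustrative columns; replace them with your own reference and current data.

```python
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataQualityPreset

# Placeholder datasets: any pandas DataFrames with matching schemas work here
reference_df = pd.DataFrame({"age": [25, 32, 47, 51], "city": ["London", "Paris", "Berlin", "Paris"]})
current_df = pd.DataFrame({"age": [29, 51, 38, None], "city": ["London", "Madrid", "Paris", "Berlin"]})

# The Preset expands into its component Metrics, each with default parameters
report = Report(metrics=[DataQualityPreset()])
report.run(reference_data=reference_df, current_data=current_df)
report.save_html("data_quality_report.html")
```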
Data Quality Preset
DataQualityPreset captures column and dataset summaries. Input columns are required. Prediction and target are optional.

Composition:

- DatasetSummaryMetric()
- ColumnSummaryMetric() for all or specified columns
- DatasetMissingValuesMetric()

Optional parameters:

- columns
Data Drift Preset
DataDriftPreset evaluates the data distribution drift in all individual columns, and the share of drifting columns in the dataset. Input columns are required.

Composition:

- DataDriftTable() for all or specified columns
- DatasetDriftMetric() for all or specified columns

Optional parameters:

- columns
- stattest
- cat_stattest
- num_stattest
- per_column_stattest
- text_stattest
- stattest_threshold
- cat_stattest_threshold
- num_stattest_threshold
- per_column_stattest_threshold
- text_stattest_threshold
- embeddings
- embeddings_drift_method
- drift_share
How to set data drift parameters and embeddings drift parameters.
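For example, here is a sketch of a customized DataDriftPreset. The parameter names follow the list above; "wasserstein" and "psi" are two of the supported drift detection methods.

```python
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

report = Report(metrics=[
    DataDriftPreset(
        num_stattest="wasserstein",  # drift method for numerical columns
        cat_stattest="psi",          # drift method for categorical columns
        stattest_threshold=0.1,      # custom drift detection threshold
        drift_share=0.5,             # dataset drifts if 50% of columns drift
    )
])
```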
Target Drift Preset
TargetDriftPreset evaluates the prediction or target drift. Target and/or prediction is required. Input features are optional.

Composition:

- ColumnDriftMetric() for target and/or prediction columns
- ColumnCorrelationsMetric() for target and/or prediction columns
- TargetByFeaturesTable() for all or specified columns
- ColumnValuePlot() for target and/or prediction columns, if the task is regression

Optional parameters:

- columns
- stattest
- cat_stattest
- num_stattest
- per_column_stattest
- stattest_threshold
- cat_stattest_threshold
- num_stattest_threshold
- per_column_stattest_threshold
How to set data drift parameters.
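For example, a sketch of running the Preset with a custom drift method for the target and prediction columns ("psi" is one of the supported methods):

```python
from evidently.report import Report
from evidently.metric_preset import TargetDriftPreset

# Override the default drift detection logic with the PSI test
report = Report(metrics=[TargetDriftPreset(stattest="psi", stattest_threshold=0.2)])
```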
Regression Preset
RegressionPreset evaluates the quality of a regression model. Prediction and target are required. Input features are optional.

Composition:

- RegressionQualityMetric()
- RegressionPredictedVsActualScatter()
- RegressionPredictedVsActualPlot()
- RegressionErrorPlot()
- RegressionAbsPercentageErrorPlot()
- RegressionErrorDistribution()
- RegressionErrorNormality()
- RegressionTopErrorMetric()
- RegressionErrorBiasTable() for all or specified columns

Optional parameters:

- columns
Classification Preset
ClassificationPreset evaluates the quality of a classification model. Prediction and target are required. Input features are optional.

Composition:

- ClassificationQualityMetric()
- ClassificationClassBalance()
- ClassificationConfusionMatrix()
- ClassificationQualityByClass()
- ClassificationClassSeparationPlot(), if probabilistic classification
- ClassificationProbDistribution(), if probabilistic classification
- ClassificationRocCurve(), if probabilistic classification
- ClassificationPRCurve(), if probabilistic classification
- ClassificationPRTable(), if probabilistic classification
- ClassificationQualityByFeatureTable() for all or specified columns

Optional parameters:

- columns
- probas_threshold
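For example, a sketch of running the Preset with a custom decision threshold for probabilistic classification:

```python
from evidently.report import Report
from evidently.metric_preset import ClassificationPreset

# Treat predictions with probability >= 0.7 as the positive class
report = Report(metrics=[ClassificationPreset(probas_threshold=0.7)])
```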
Text Evals
TextEvals() provides a simplified interface to list Descriptors for a given text column. It returns a summary of the evaluation results.

Composition:

- ColumnSummaryMetric() for the text Descriptors computed on the specified text column. Default Descriptors: Sentiment(), SentenceCount(), OOV(), TextLength(), NonLetterCharacterPercentage()

Required parameters:

- column_name

Optional parameters:

- descriptors: list
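For example, a sketch of overriding the default Descriptor list; the column name "response" is a placeholder for your text column:

```python
from evidently.report import Report
from evidently.metric_preset import TextEvals
from evidently.descriptors import Sentiment, TextLength, IncludesWords

report = Report(metrics=[
    TextEvals(
        column_name="response",  # the text column to evaluate
        descriptors=[
            Sentiment(),
            TextLength(),
            IncludesWords(words_list=["refund", "cancel"]),
        ],
    )
])
```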
RecSys (Recommender System) Preset
RecsysPreset evaluates the quality of the recommender system. Recommendations and true relevance scores are required. For some metrics, training data and item features are required.

Composition:

- PrecisionTopKMetric()
- RecallTopKMetric()
- FBetaTopKMetric()
- MAPKMetric()
- NDCGKMetric()
- MRRKMetric()
- HitRateKMetric()
- PersonalizationMetric()
- PopularityBias()
- RecCasesTable()
- ScoreDistribution()
- DiversityMetric()
- SerendipityMetric()
- NoveltyMetric()
- ItemBiasMetric() (pass the column as a parameter)
- UserBiasMetric() (pass the column as a parameter)

Required parameter:

- k

Optional parameters:

- min_rel_score: Optional[int]
- no_feedback_users: bool
- normalize_arp: bool
- user_ids: Optional[List[Union[int, str]]]
- display_features: Optional[List[str]]
- item_features: Optional[List[str]]
- user_bias_columns: Optional[List[str]]
- item_bias_columns: Optional[List[str]]
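For example, a minimal sketch of the Preset: k is required, and the other parameters shown are optional and illustrative.

```python
from evidently.report import Report
from evidently.metric_preset import RecsysPreset

report = Report(metrics=[
    RecsysPreset(
        k=10,                    # evaluate the top-10 recommendations
        min_rel_score=4,         # treat relevance scores of 4+ as relevant
        no_feedback_users=True,  # include users who gave no feedback
    )
])
```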
Data Quality

| Metric | Parameters |
|---|---|
| DatasetSummaryMetric() Dataset-level. Calculates descriptive dataset statistics. | Required: n/a. Optional: almost_duplicated_threshold |
| DatasetMissingValuesMetric() Dataset-level. Calculates the number and share of missing values in the dataset. Displays the number of missing values per column. | Required: n/a. Optional: missing_values: list, replace: bool |
| DatasetCorrelationsMetric() Dataset-level. Calculates the correlations between all columns in the dataset. Uses: Pearson, Spearman, Kendall, Cramer_V. Visualizes the heatmap. | Required: n/a. Optional: n/a |
| ColumnSummaryMetric() Column-level. Calculates various descriptive statistics for numerical, categorical, text or DateTime columns. Plots the distribution histogram. If DateTime is provided, also plots the distribution over time. If Target is provided, also plots the relation with Target. | Required: column_name. Optional: n/a |
| ColumnMissingValuesMetric() Column-level. Calculates the number and share of missing values in the column. | Required: column_name. Optional: missing_values: list, replace: bool |
| ColumnRegExpMetric() Column-level. Calculates the number and share of the values that do not match a defined regular expression. Example use: ColumnRegExpMetric(column_name="status", reg_exp=r".*child.*") | Required: column_name, reg_exp. Optional: top |
| ColumnDistributionMetric() Column-level. Plots the distribution histogram and returns bin positions and values for the given column. | Required: column_name. Optional: n/a |
| ColumnValuePlot() Column-level. Plots the values in time. | Required: column_name. Optional: n/a |
| ColumnQuantileMetric() Column-level. Calculates the defined quantile value and plots the distribution for the given numerical column. Example use: ColumnQuantileMetric(column_name="name", quantile=0.75) | Required: column_name, quantile. Optional: n/a |
| ColumnCorrelationsMetric() Column-level. Calculates the correlations between the defined column and all the other columns in the dataset. | Required: column_name. Optional: n/a |
| ColumnValueListMetric() Column-level. Calculates the number of values in the list / out of the list / not found in a given column. The value list should be specified. Example use: ColumnValueListMetric(column_name="city", values=["London", "Paris"]) | Required: column_name, values. Optional: n/a |
| ColumnValueRangeMetric() Column-level. Calculates the number and share of values in the specified range / out of range in a given column. Plots the distributions. Example use: ColumnValueRangeMetric(column_name="age", left=10, right=20) | Required: column_name, left, right. Optional: n/a |
| ConflictPredictionMetric() Dataset-level. Calculates the number of instances where the model returns a different output for an identical input. Can be a signal of a low-quality model or data errors. | Required: n/a. Optional: n/a |
| ConflictTargetMetric() Dataset-level. Calculates the number of instances where there is a different target value or label for an identical input. Can be a signal of a labeling or data error. | Required: n/a. Optional: n/a |
Defaults for Missing Values. The metrics that calculate the number or share of missing values detect four types of missing values by default: Pandas nulls (None, NAN, etc.), "" (empty string), Numpy "-inf" value, Numpy "inf" value. You can also pass custom missing values as a parameter and specify if you want to replace the default list. Example:
DatasetMissingValuesMetric(missing_values=["", 0, "n/a", -9999, None], replace=True)
Text Evals only apply to text columns. To compute a Descriptor for a single text column, use the TextEvals Preset. Read docs.

You can also explicitly specify an Evidently Metric (e.g., ColumnSummaryMetric) to visualize a Descriptor, or pick a Test (e.g., TestColumnValueMin) to run validations, as shown in the sketch below.
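A sketch of both options, assuming a text column named "response" and the descriptor .on("column") syntax:

```python
from evidently.report import Report
from evidently.metrics import ColumnSummaryMetric
from evidently.descriptors import TextLength
from evidently.test_suite import TestSuite
from evidently.tests import TestColumnValueMin

# Visualize a Descriptor with an explicit Metric
report = Report(metrics=[ColumnSummaryMetric(column_name=TextLength().on("response"))])

# Or validate it with a Test: here, expect every text to be non-empty
suite = TestSuite(tests=[TestColumnValueMin(column_name=TextLength().on("response"), gt=0)])
```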
Check if the text matches defined patterns, contains specific words or items, or has a valid format.
| Descriptor | Parameters |
|---|---|
| RegExp() Matches the text against a defined regular expression. Example use: RegExp(reg_exp=r"^I") | Required: reg_exp. Optional: display_name |
| BeginsWith() Checks if the text begins with a defined prefix. Example use: BeginsWith(prefix="How") | Required: prefix. Optional: display_name |
| EndsWith() Checks if the text ends with a defined suffix. Example use: EndsWith(suffix="Thank you.") | Required: suffix. Optional: display_name |
| Contains() Checks if the text contains the defined items. Example use: Contains(items=["medical leave"]) | Required: items: List[str]. Optional: display_name |
| DoesNotContain() Checks if the text does not contain the defined items. Example use: DoesNotContain(items=["as a large language model"]) | Required: items: List[str]. Optional: display_name |
| IncludesWords() Checks if the text includes the defined words. Example use: IncludesWords(words_list=['booking', 'hotel', 'flight']) | Required: words_list: List[str]. Optional: display_name |
| ExcludesWords() Checks if the text excludes the defined words. Example use: ExcludesWords(words_list=['buy', 'sell', 'bet']) | Required: words_list: List[str]. Optional: display_name |
| ItemMatch() Checks if the text contains the items listed in a separate column of the same dataset. Example use: ItemMatch(with_column="expected") | Required: with_column: str. Optional: display_name |
| ItemNoMatch() Checks if the text does not contain the items listed in a separate column of the same dataset. Example use: ItemNoMatch(with_column="forbidden") | Required: with_column: str. Optional: display_name |
| WordMatch() Checks if the text includes the words listed in a separate column of the same dataset. Example use: WordMatch(with_column="expected") | Required: with_column: str. Optional: display_name |
| WordNoMatch() Checks if the text excludes the words listed in a separate column of the same dataset. Example use: WordNoMatch(with_column="forbidden") | Required: with_column: str. Optional: display_name |
| ExactMatch() Checks if the text exactly matches the contents of a defined column. Example use: ExactMatch(with_column='reference') | Required: with_column: str. Optional: display_name |
| IsValidJSON() Checks if the text is valid JSON. | Required: n/a. Optional: display_name |
| JSONSchemaMatch() Checks if the text matches a defined JSON schema. Example use: JSONSchemaMatch(expected_schema={"name": str, "age": int}, exact_match=False, validate_types=True) | Required: expected_schema: Dict[str, type]. Optional: exact_match: bool, validate_types: bool |
| JSONMatch() Checks if the text is equivalent to the JSON in a defined column. Example use: JSONMatch(with_column="column_2") | Required: with_column: str. Optional: display_name |
| ContainsLink() Checks if the text contains a valid URL. | Required: n/a. Optional: display_name |
| IsValidPython() Checks if the text is valid Python code. | Required: n/a. Optional: display_name |
| IsValidSQL() Checks if the text is a valid SQL query. | Required: n/a. Optional: display_name |
Computes descriptive text statistics.
| Descriptor | Parameters |
|---|---|
| TextLength() Measures the length of the text in symbols. | Required: n/a. Optional: display_name |
| OOV() Calculates the percentage of out-of-vocabulary words. | Required: n/a. Optional: ignore_words, display_name |
| NonLetterCharacterPercentage() Calculates the percentage of non-letter characters. | Required: n/a. Optional: display_name |
| SentenceCount() Counts the number of sentences in the text. | Required: n/a. Optional: display_name |
| WordCount() Counts the number of words in the text. | Required: n/a. Optional: display_name |
Use external LLMs with an evaluation prompt to score text data (also known as the LLM-as-a-judge method).
| Descriptor | Parameters |
|---|---|
| LLMEval() Scores the text using user-defined criteria, automatically formatted in a templated evaluation prompt. | See docs for examples and parameters. |
| DeclineLLMEval() Detects texts containing a refusal or rejection to do something. Returns a label (DECLINE or OK) or score. | See docs for parameters. |
| PIILLMEval() Detects texts containing PII (Personally Identifiable Information). Returns a label (PII or OK) or score. | See docs for parameters. |
| NegativityLLMEval() Detects negative texts (containing a critical or pessimistic tone). Returns a label (NEGATIVE or POSITIVE) or score. | See docs for parameters. |
| BiasLLMEval() Detects biased texts (containing prejudice for or against a person or group). Returns a label (BIAS or OK) or score. | See docs for parameters. |
| ToxicityLLMEval() Detects toxic texts (containing harmful, offensive, or derogatory language). Returns a label (TOXICITY or OK) or score. | See docs for parameters. |
| ContextQualityLLMEval() Evaluates if the CONTEXT is VALID (has sufficient information to answer the QUESTION) or INVALID (has missing or contradictory information). Returns a label (VALID or INVALID) or score. | Run the descriptor over the context column and pass the question column as a parameter. See docs for parameters. |
Use pre-trained machine learning models for evaluation.
| Descriptor | Parameters |
|---|---|
| SemanticSimilarity() Calculates the semantic similarity between the text and the text in a defined column, using an embedding model. Example use: SemanticSimilarity(with_column="response") | Required: with_column: str. Optional: display_name |
| Sentiment() Analyzes the sentiment of the text. Returns a score from -1 (very negative) to 1 (very positive). | Required: n/a. Optional: display_name |
| HuggingFaceModel() Scores the text using a user-selected HuggingFace model. | See docs with some example models (classification by topic, emotion, etc.) |
| HuggingFaceToxicityModel() Detects toxicity using a pre-trained toxicity classification model from HuggingFace. | Optional: toxic_label, display_name |
| BERTScore() Calculates the similarity between the text and the text in a defined column using token embeddings from a BERT model. | Required: with_column: str. Optional: display_name |
Defaults for Data Drift. By default, all data drift metrics use the Evidently drift detection logic that selects a drift detection method based on feature type and volume. You always need a reference dataset.
To modify the logic or select a different test, you should set data drift parameters or embeddings drift parameters. You can choose from 20+ drift detection methods and optionally pass feature importances.
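For example, a sketch of setting a custom method and threshold on a single column; the column name is a placeholder, and "psi" is one of the supported methods:

```python
from evidently.report import Report
from evidently.metrics import ColumnDriftMetric

# Detect drift in the "price" column using the PSI test with a 0.1 threshold
report = Report(metrics=[ColumnDriftMetric(column_name="price", stattest="psi", stattest_threshold=0.1)])
```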
| Metric | Parameters |
|---|---|
| DatasetDriftMetric() Dataset-level. Calculates the number and share of drifted columns, and returns the dataset drift decision at a given threshold. | Required: n/a. Optional: columns, drift_share, stattest and the related per-type and per-column drift parameters |
| DataDriftTable() Dataset-level. Calculates data drift for all or specified columns and shows the results in a table. | Required: n/a. Optional: columns, stattest and the related per-type and per-column drift parameters |
| ColumnDriftMetric() Column-level. Calculates data drift for a defined column and visualizes the distributions. | Required: column_name. Optional: stattest, stattest_threshold |
| EmbeddingsDriftMetric() Dataset-level. Calculates data drift for a defined set of embedding columns. | Required: embeddings_name. Optional: drift_method |
The metrics work both for probabilistic and non-probabilistic classification. All metrics are dataset-level. All metrics require column mapping of target and prediction.
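A sketch of passing the column mapping, assuming a DataFrame with binary labels and predicted probabilities:

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.report import Report
from evidently.metric_preset import ClassificationPreset

# Placeholder data: true binary labels and predicted probabilities
current_df = pd.DataFrame({
    "label": [0, 1, 1, 0, 1],
    "predicted_proba": [0.2, 0.8, 0.6, 0.4, 0.9],
})

# Map the target and prediction columns so the metrics know where to look
column_mapping = ColumnMapping(target="label", prediction="predicted_proba")

report = Report(metrics=[ClassificationPreset()])
report.run(reference_data=None, current_data=current_df, column_mapping=column_mapping)
```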
| Metric | Parameters |
|---|---|
| ClassificationDummyMetric() Calculates the quality of a dummy model built on the same data. This can serve as a baseline. | Required: n/a. Optional: n/a |
| ClassificationQualityMetric() Calculates various classification performance metrics, including accuracy, precision, recall, and F1-score (plus ROC AUC and LogLoss for probabilistic classification). | Required: n/a. Optional: probas_threshold, k |
| ClassificationClassBalance() Calculates the number of objects for each label. Plots the histogram. | Required: n/a. Optional: n/a |
| ClassificationConfusionMatrix() Calculates the TPR, TNR, FPR, FNR, and plots the confusion matrix. | Required: n/a. Optional: probas_threshold, k |
| ClassificationQualityByClass() Calculates the classification quality metrics for each class. Plots the matrix. | Required: n/a. Optional: probas_threshold, k |
| ClassificationClassSeparationPlot() Visualizes the predicted probabilities by class. Applicable for probabilistic classification only. | Required: n/a. Optional: n/a |
| ClassificationProbDistribution() Visualizes the probability distribution by class. Applicable for probabilistic classification only. | Required: n/a. Optional: n/a |
| ClassificationRocCurve() Plots the ROC curve. Applicable for probabilistic classification only. | Required: n/a. Optional: n/a |
| ClassificationPRCurve() Plots the precision-recall curve. Applicable for probabilistic classification only. | Required: n/a. Optional: n/a |
| ClassificationPRTable() Calculates the precision-recall table that shows model quality at different decision thresholds. | Required: n/a. Optional: n/a |
| ClassificationQualityByFeatureTable() Plots the relationship between feature values and model quality. | Required: n/a. Optional: columns |
All metrics are dataset-level. All metrics require column mapping of target and prediction.
| Metric | Parameters |
|---|---|
| RegressionDummyMetric() Calculates the quality of a dummy model built on the same data. This can serve as a baseline. | Required: n/a. Optional: n/a |
| RegressionQualityMetric() Calculates various regression performance metrics, including Mean Error (ME), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). | Required: n/a. Optional: n/a |
| RegressionPredictedVsActualScatter() Visualizes predicted vs. actual values in a scatter plot. | Required: n/a. Optional: n/a |
| RegressionPredictedVsActualPlot() Visualizes predicted vs. actual values in a line plot. | Required: n/a. Optional: n/a |
| RegressionErrorPlot() Visualizes the model error (predicted - actual) in a line plot. | Required: n/a. Optional: n/a |
| RegressionAbsPercentageErrorPlot() Visualizes the absolute percentage error in a line plot. | Required: n/a. Optional: n/a |
| RegressionErrorDistribution() Visualizes the distribution of the model error in a histogram. | Required: n/a. Optional: n/a |
| RegressionErrorNormality() Visualizes the quantile-quantile (Q-Q) plot to estimate error normality. | Required: n/a. Optional: n/a |
| RegressionTopErrorMetric() Calculates the regression performance metrics for different groups: top-X% of predictions with overestimation, top-X% with underestimation, and the majority. Visualizes the group division on a scatter plot with predicted vs. actual values. | Required: n/a. Optional: top_error |
| RegressionErrorBiasTable() Plots the relationship between feature values and model quality per group (for top-X% error groups, as above). | Required: n/a. Optional: columns, top_error |
All metrics are dataset-level. Check individual metric descriptions here. All metrics require recommendations column mapping.

Optional shared parameters for multiple metrics:

- no_feedback_users: bool = False. Specifies whether to include users who did not select any of the items when computing the quality metric. Default: False.
- min_rel_score: Optional[int] = None. Specifies the minimum relevance score to consider relevant when calculating the quality metrics for non-binary targets (e.g., if the target is a rating or a custom score).
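For example, a sketch of passing these shared parameters to individual metrics:

```python
from evidently.metrics import PrecisionTopKMetric, RecallTopKMetric

metrics = [
    PrecisionTopKMetric(k=10),
    RecallTopKMetric(k=10, min_rel_score=4, no_feedback_users=True),
]
```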
| Metric | Parameters |
|---|---|
| RecallTopKMetric() Calculates the recall at k. | Required: k. Optional: no_feedback_users, min_rel_score |
| PrecisionTopKMetric() Calculates the precision at k. | Required: k. Optional: no_feedback_users, min_rel_score |
| FBetaTopKMetric() Calculates the F-measure at k. | Required: k. Optional: no_feedback_users, min_rel_score |
| MAPKMetric() Calculates the Mean Average Precision (MAP) at k. | Required: k. Optional: no_feedback_users, min_rel_score |
| MARKMetric() Calculates the Mean Average Recall (MAR) at k. | Required: k. Optional: no_feedback_users, min_rel_score |
| NDCGKMetric() Calculates the Normalized Discounted Cumulative Gain (NDCG) at k. | Required: k. Optional: no_feedback_users, min_rel_score |
| MRRKMetric() Calculates the Mean Reciprocal Rank (MRR) at k. | Required: k. Optional: no_feedback_users, min_rel_score |
| HitRateKMetric() Calculates the hit rate at k: the share of users for whom at least one relevant item is included in the top-K. | Required: k. Optional: no_feedback_users, min_rel_score |
| DiversityMetric() Calculates intra-list diversity at k: the diversity of recommendations shown to each user in the top-K recommendations, averaged over all users. | Required: k, item_features |
| NoveltyMetric() Calculates novelty at k: the novelty of recommendations shown to each user in the top-K recommendations, averaged over all users. Requires a training dataset. | Required: k |
| SerendipityMetric() Calculates serendipity at k: how unusual the relevant recommendations in the top-K are, averaged over all users. Requires a training dataset. | Required: k, item_features |
| PersonalizationMetric() Measures the average uniqueness of each user's top-K recommendations. | Required: k |
| PopularityBias() Evaluates the popularity bias in recommendations by computing ARP (average recommendation popularity), Gini index, and coverage. Requires a training dataset. | Required: k. Optional: normalize_arp |
| ItemBiasMetric() Visualizes the distribution of recommendations by a chosen dimension (column), compared to its distribution in the training set. Requires a training dataset. | Required: k, column_name |
| UserBiasMetric() Visualizes the distribution of a chosen category (e.g., a user characteristic), compared to its distribution in the training dataset. Requires a training dataset. | Required: column_name |
| ScoreDistribution() Computes the predicted score entropy. Visualizes the distribution of the scores at k (and all scores, if available). Applies only when the recommendations_type is a score. | Required: k |
| RecCasesTable() Shows the list of recommendations for specific user IDs (or 5 random users if not specified). | Required: n/a. Optional: user_ids, display_features |