
Commit a990418

wip: add backward incompatibility warning to doc
1 parent 8c1ff19 commit a990418

1 file changed: +23 -1 lines changed

docs/user-guides/community/auto-align.md

Lines changed: 23 additions & 1 deletion
@@ -7,7 +7,7 @@ AutoAlign comes with a library of built-in guardrails that you can easily use:
 1. [Gender bias Detection](#gender-bias-detection)
 2. [Harm Detection](#harm-detection)
 3. [Jailbreak Detection](#jailbreak-detection)
-4. [Confidential Detection](#confidential-detection)
+4. [Confidential Info Detection](#confidential-info-detection)
 5. [Intellectual property detection](#intellectual-property-detection)
 6. [Racial bias Detection](#racial-bias-detection)
 7. [Tonal Detection](#tonal-detection)
@@ -385,6 +385,10 @@ For intellectual property detection, the matching score has to be following form
 
 ### Confidential Info detection
 
+```{warning}
+Backward incompatible changes are introduced in v0.12.0 due to AutoAlign API changes
+```
+
 The goal of the confidential info detection rail is to determine if the text has any kind of confidential information. This rail can be applied at both input and output.
 This guardrail can be added by adding `confidential_info_detection` key in the dictionary under `guardrails_config` section
 which is under `input` or `output` section which should be in `autoalign` section in `config.yml`.
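
The nesting described in the context lines above is easier to see as YAML. A minimal sketch of that structure, assuming the `autoalign` section sits under `rails.config` and that an empty mapping enables the guardrail with its defaults (neither assumption is confirmed by this diff):

```yaml
# Sketch only: the `rails.config` parent of `autoalign` and the empty
# mappings ("enable with defaults") are assumptions, not taken from this diff.
rails:
  config:
    autoalign:
      input:
        guardrails_config:
          confidential_info_detection: {}   # check user input
      output:
        guardrails_config:
          confidential_info_detection: {}   # check bot output
```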
@@ -439,6 +443,10 @@ For tonal detection, the matching score has to be following format:
 
 ### Toxicity extraction
 
+```{warning}
+Backward incompatible changes are introduced in v0.12.0 due to AutoAlign API changes
+```
+
 The goal of the toxicity detection rail is to determine if the text has any kind of toxic content. This rail can be applied at both input and output. This guardrail not just detects the toxicity of the text but also extracts toxic phrases from the text.
 This guardrail can be added by adding `toxicity_detection` key in the dictionary under `guardrails_config` section
 which is under `input` or `output` section which should be in `autoalign` section in `config.yml`.
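
The toxicity rail follows the same pattern under a `toxicity_detection` key. A hedged sketch, with the same `rails.config` and empty-mapping assumptions as above:

```yaml
# Sketch: only the `toxicity_detection` key name comes from the doc text above;
# the surrounding `rails.config` nesting and empty mappings are assumed.
rails:
  config:
    autoalign:
      input:
        guardrails_config:
          toxicity_detection: {}   # check user input for toxic content
      output:
        guardrails_config:
          toxicity_detection: {}   # detect and extract toxic phrases in bot output
```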
@@ -485,6 +493,10 @@ define bot refuse to respond
 
 ### PII
 
+```{warning}
+Backward incompatible changes are introduced in v0.12.0 due to AutoAlign API changes
+```
+
 To use AutoAlign's PII (Personal Identifiable Information) module, you have to list the entities that you wish to redact
 in `enabled_types` in the dictionary of `guardrails_config` under the key of `pii`; if not listed then all PII types will be redacted.
 
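
The guide's own example PII config appears later in the file; purely to illustrate the `enabled_types` placement described above, a sketch in which the entity labels and the `rails.config` nesting are assumptions:

```yaml
# Illustrative sketch: the `pii` / `enabled_types` placement is from the text above,
# but the entity labels and the `rails.config` nesting are assumptions.
rails:
  config:
    autoalign:
      output:
        guardrails_config:
          pii:
            enabled_types:        # omit this list to redact all PII types
              - "[PERSON NAME]"
              - "[EMAIL ADDRESS]"
```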
@@ -558,6 +570,11 @@ Example PII config:
 ```
 
 ### Groundness Check
+
+```{warning}
+Backward incompatible changes are introduced in v0.12.0 due to AutoAlign API changes
+```
+
 The groundness check needs an input statement (represented as ‘prompt’) as a list of evidence documents.
 To use AutoAlign's groundness check module, you have to modify the `config.yml` in the following format:
 
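
The actual groundness-check format follows in the original file and is not part of this diff. As a shape hint only, a hypothetical sketch that mirrors the nesting used by the other guardrails; the `groundedness_check` key name is a placeholder, not taken from the guide:

```yaml
# Hypothetical sketch: `groundedness_check` is a placeholder key name,
# not confirmed by this diff; see the original guide for the actual format.
rails:
  config:
    autoalign:
      output:
        guardrails_config:
          groundedness_check: {}   # evidence documents are supplied via the `kb` folder
```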
@@ -636,6 +653,11 @@ The supporting documents or the evidence has to be placed within a `kb` folder w
 
 
 ### Fact Check
+
+```{warning}
+Backward incompatible changes are introduced in v0.12.0 due to AutoAlign API changes
+```
+
 The fact check uses the bot response and user input prompt to check the factual correctness of the bot response based on the user prompt. Unlike groundness check, fact check does not use a pre-existing internal knowledge base.
 To use AutoAlign's fact check module, modify the `config.yml` from example autoalign_factcheck_config.
 
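
The fact-check configuration likewise lives in the `autoalign_factcheck_config` example rather than in this diff. A placeholder sketch of where such a key would plausibly sit; the `fact_check` key name is hypothetical:

```yaml
# Hypothetical sketch: `fact_check` is a placeholder key name; refer to the
# autoalign_factcheck_config example for the real configuration.
rails:
  config:
    autoalign:
      output:
        guardrails_config:
          fact_check: {}   # checks the bot response against the user prompt, no internal kb
```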