
feat(presidio_pii_masking.py): allow request level controls for turning on/off pii masking #2037

Merged
1 commit merged into main from litellm_request_level_pii_masking on Feb 18, 2024

Conversation

@krrishdholakia (Contributor) commented Feb 17, 2024

Addresses - #2003

Support 2 request-level PII controls (sketched below):

  • no-pii: Optional[bool] - allow the user to turn off PII masking for a given request.
  • output_parse_pii: Optional[bool] - allow the user to turn off PII output parsing for a given request.
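
A minimal sketch of the control payload, assuming both flags are passed under the content_safety key of the request body (Step 2 below shows this for output_parse_pii):

content_safety = {
    "no-pii": True,             # skip PII masking for this request
    "output_parse_pii": False,  # skip PII output parsing for this request
}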

Usage

Step 1. Create key with PII permissions

Set allow_pii_controls to true for a given key. This allows the user to set request-level PII controls.

curl --location 'http://0.0.0.0:8000/key/generate' \
--header 'Authorization: Bearer my-master-key' \
--header 'Content-Type: application/json' \
--data '{
    "permissions": {"allow_pii_controls": true}
}'
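
For reference, a hedged sketch of the /key/generate response (the exact field set varies by LiteLLM version); the returned key is what the client in Step 2 should send:

{
  "key": "sk-...",
  "expires": "..."
}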

Step 2. Turn off PII output parsing

import os
from openai import OpenAI

client = OpenAI(
    # Use the key generated in Step 1 (it must have allow_pii_controls set)
    api_key=os.environ.get("OPENAI_API_KEY"),
    base_url="http://0.0.0.0:8000"
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "My name is Jane Doe, my number is 8382043839",
        }
    ],
    model="gpt-3.5-turbo",
    extra_body={
        "content_safety": {"output_parse_pii": False} 
    }
)

Step 3. See the response

{
  "id": "chatcmpl-8c5qbGTILZa1S4CK3b31yj5N40hFN",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Hi [PERSON], what can I help you with?",
        "role": "assistant"
      }
    }
  ],
  "created": 1704089632,
  "model": "gpt-35-turbo",
  "object": "chat.completion",
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 47,
    "prompt_tokens": 12,
    "total_tokens": 59
  },
  "_response_ms": 1753.426
}
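
Because output_parse_pii is off, the masked placeholder [PERSON] comes back as-is. A hypothetical variant, assuming the no-pii flag from this PR is accepted in the same content_safety dict, would skip masking entirely instead:

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "My name is Jane Doe, my number is 8382043839",
        }
    ],
    model="gpt-3.5-turbo",
    extra_body={
        "content_safety": {"no-pii": True}  # assumed: disable PII masking for this request
    },
)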


@krrishdholakia merged commit 6f77a4a into main on Feb 18, 2024 (9 checks passed).
@krrishdholakia deleted the litellm_request_level_pii_masking branch on February 18, 2024 at 03:05.