Context
To comply with the Government of Canada's Digital Standards, we are implementing potentially abusive user detection insights in our applications. This initiative focuses on addressing security and privacy risks, designing ethical services, and ensuring responsible data stewardship. The detection system uses the Azure OpenAI service to analyze user requests and identify potentially harmful content or behaviors. When flagged, these instances are summarized in a report available in the Azure OpenAI Studio, enabling proactive management and a safer user environment.
To discuss
As of now, the requests made from our different applications don't carry any user information. We would need to implement changes so that each call identifies the user issuing the prompt (see the sketch below). We should evaluate whether this is worthwhile, considering what the metric actually adds. This is what we would get from adding user information:
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/risks-safety-monitor#report-description-1
Note that the report does not include the prompt itself, so there is no way to tell whether a blocked request was a false positive.
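
For reference, a minimal sketch of what the change could look like, assuming our applications call Azure OpenAI through the `openai` Python SDK. The deployment name and the way the user ID is derived are placeholders; the `user` field is the standard request parameter for attaching an end-user identifier, which the abuse-monitoring report can then use to group flagged requests per user:

```python
import hashlib
import os

from openai import AzureOpenAI  # openai >= 1.0

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# Hash the internal user ID so monitoring can group flagged requests
# per user without sending the raw identifier to the service.
# "some-internal-user-id" is a placeholder for our real identifier.
user_id = hashlib.sha256("some-internal-user-id".encode()).hexdigest()

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "Hello"}],
    user=user_id,  # end-user identifier picked up by abuse monitoring
)
print(response.choices[0].message.content)
```

Passing a hashed or pseudonymous ID rather than a raw one keeps the per-user breakdown useful while limiting what personal data leaves our applications.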
Actions to take