From 9c710d54a6dd805c0759c8c40799f7592806278f Mon Sep 17 00:00:00 2001
From: lucianommartins
Date: Sat, 11 May 2024 00:54:22 +0000
Subject: [PATCH 1/3] Adjust Safety.ipynb quickstart with safety filter details

---
 quickstarts/Safety.ipynb | 151 ++++++++++++++++-----------------------
 1 file changed, 62 insertions(+), 89 deletions(-)

diff --git a/quickstarts/Safety.ipynb b/quickstarts/Safety.ipynb
index dbc2c211e..b37422281 100644
--- a/quickstarts/Safety.ipynb
+++ b/quickstarts/Safety.ipynb
@@ -61,7 +61,7 @@
 "source": [
 "The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You'll write a prompt that's blocked, see the reason why, and then adjust the filters to unblock it.\n",
 "\n",
- "Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, we're focused on the code."
+ "Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code."
 ]
 },
 {
@@ -75,6 +75,17 @@
 "!pip install -q -U google-generativeai # Install the Python SDK"
 ]
 },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3VAUtJubX7MG"
+ },
+ "source": [
+ "## Import the Gemini Python SDK\n",
+ "\n",
+ "Once the SDK is installed, you can import it:"
+ ]
+ },
 {
 "cell_type": "code",
 "execution_count": null,
@@ -116,7 +127,9 @@
 "id": "LZfoK3I3hu6V"
 },
 "source": [
- "## Prompt Feedback\n",
+ "## Send your prompt request to Gemini\n",
+ "\n",
+ "Pick the prompt you want to use to test the safety filter settings. An example could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark`, which was previously tested and triggered the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.\n",
 "\n",
 "The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)."
 ]
@@ -131,7 +144,7 @@
 "source": [
 "model = genai.GenerativeModel('gemini-1.0-pro')\n",
 "\n",
- "unsafe_prompt = # Put your unsafe prompt here\n",
+ "unsafe_prompt = \"insert your unsafe prompt here\" # @param {type:\"string\"}\n",
 "response = model.generate_content(unsafe_prompt)"
 ]
 },
@@ -141,11 +154,13 @@
 "id": "WR_2A_sxk8sK"
 },
 "source": [
- "This response object gives you safety feedback in two ways:\n",
+ "This response object gives you safety feedback about the candidate answers Gemini generates for you.\n",
 "\n",
- "* The `prompt_feedback.safety_ratings` attribute contains a list of safety ratings for the input prompt.\n",
+ "For each candidate answer you need to check its `finish_reason`, e.g. `response.candidates[0].finish_reason`.\n",
 "\n",
- "* If your prompt is blocked, `prompt_feedback.block_reason` field will explain why."
+ "As described in the [Gemini API safety filters documentation](https://ai.google.dev/gemini-api/docs/safety-settings#safety-feedback):\n",
+ "- if `candidate.finish_reason` is `FinishReason.STOP`, your generation request ran successfully;\n",
+ "- if `candidate.finish_reason` is `FinishReason.SAFETY`, your generation request was blocked for safety reasons, and accessing `response.text` will raise an error because nothing was generated."
 ]
 },
 {
@@ -156,7 +171,16 @@
 },
 "outputs": [],
 "source": [
- "bool(response.prompt_feedback.block_reason)"
+ "print(response.candidates[0].finish_reason)"
 ]
 },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XBdqPso3kamW"
+ },
+ "source": [
+ "If the `finish_reason` is `FinishReason.SAFETY`, you can check which filter caused the block by inspecting the candidate's `safety_ratings` list:"
+ ]
+ },
 {
@@ -167,27 +191,30 @@
 },
 "outputs": [],
 "source": [
- "response.prompt_feedback.safety_ratings"
+ "print(response.candidates[0].safety_ratings)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {
- "id": "72b4a8808bb9"
+ "id": "z9-SdzjbxWXT"
 },
 "source": [
- "If the prompt is blocked because of the safety ratings, you will not get any candidates in the response:"
+ "As the request was blocked by the safety filters, accessing `response.text` will raise an error (as nothing was generated by the model):"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
- "id": "f20d9269325d"
+ "id": "L1Da4cJ3xej3"
 },
 "outputs": [],
 "source": [
- "response.candidates"
+ "try:\n",
+ "    print(response.text)\n",
+ "except ValueError:\n",
+ "    print(\"No information generated by the model.\")"
 ]
 },
 {
@@ -196,16 +223,13 @@
 "id": "4672af98ac57"
 },
 "source": [
- "### Safety settings"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "2a6229f6d3a1"
- },
- "source": [
- "Adjust the safety settings and the prompt is no longer blocked. The Gemini API has four configurable safety settings."
+ "## Customizing safety settings\n",
+ "\n",
+ "Depending on the scenario you are working with, it may be necessary to customize the safety filters' behavior to allow a certain degree of unsafe results.\n",
+ "\n",
+ "To make this customization, define a `safety_settings` dictionary as part of your `model.generate_content()` request. In the example below, all the filters are set to not block any content.\n",
+ "\n",
+ "**Important:** If your prompt is too unsafe, Gemini will avoid generating results even if you set all the filters to none, honoring Google's commitment to responsible AI development and its [AI Principles](https://ai.google/responsibility/principles/)."
 ]
 },
 {
@@ -229,113 +253,64 @@
 {
 "cell_type": "markdown",
 "metadata": {
- "id": "86c560e0a641"
+ "id": "564K7R8rwWhs"
 },
 "source": [
- "With the new settings, the `blocked_reason` is no longer set."
+ "Checking the `candidate.finish_reason` information again: if the request was not too unsafe, it should now show the value `FinishReason.STOP`, which means that the request was successfully processed by Gemini."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
- "id": "0c2847c49262"
+ "id": "LazB08GBpc1w"
 },
 "outputs": [],
 "source": [
- "bool(response.prompt_feedback.block_reason)"
+ "print(response.candidates[0].finish_reason)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {
- "id": "47298a4eef40"
- },
- "source": [
- "And a candidate response is returned."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "028febe8df68"
- },
- "outputs": [],
- "source": [
- "len(response.candidates)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ujVlQoC43N3B"
- },
- "source": [
- "You can check `response.text` for the response."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "de8ee74634af"
- },
- "outputs": [],
- "source": [
- "response.text"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "3d401c247957"
- },
- "source": [
- "### Candidate ratings"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "3d306960dffb"
+ "id": "86c560e0a641"
 },
 "source": [
- "For a prompt that is not blocked, the response object contains a list of `candidate` objects (just 1 for now). Each candidate includes a `finish_reason`:"
+ "Since the request ran successfully, you can check the result in `response.text`:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
- "id": "e49b53f69a2c"
+ "id": "0c2847c49262"
 },
 "outputs": [],
 "source": [
- "candidate = response.candidates[0]\n",
- "candidate.finish_reason"
+ "try:\n",
+ "    print(response.text)\n",
+ "except ValueError:\n",
+ "    print(\"No information generated by the model.\")"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {
- "id": "badddf10089b"
+ "id": "47298a4eef40"
 },
 "source": [
- "`FinishReason.STOP` means that the model finished its output normally.\n",
- "\n",
- "`FinishReason.SAFETY` means the candidate's `safety_ratings` exceeded the request's `safety_settings` threshold."
+ "And if you check the safety ratings: as you set all filters to `BLOCK_NONE`, no filtering category was triggered:"
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
- "id": "2b60d9f96af0"
+ "id": "028febe8df68"
 },
 "outputs": [],
 "source": [
- "candidate.safety_ratings"
+ "print(response.candidates[0].safety_ratings)"
 ]
 },
 {
@@ -352,12 +327,10 @@
 "\n",
 "There are 4 configurable safety settings for the Gemini API:\n",
 "* `HARM_CATEGORY_HATE_SPEECH`\n",
- "*`HARM_CATEGORY_HARASSMENT`\n",
+ "* `HARM_CATEGORY_HARASSMENT`\n",
 "* `HARM_CATEGORY_SEXUALLY_EXPLICIT`\n",
 "* `HARM_CATEGORY_DANGEROUS_CONTENT`\n",
 "\n",
- "Note: while the API [reference](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) includes others, the remainder are for older models.\n",
- "\n",
 "* You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n",
 "\n",
 "Safety settings can be set in the [genai.GenerativeModel](https://ai.google.dev/api/python/google/generativeai/GenerativeModel) constructor.\n",

From 9adb22feee1ce7bafc71002d94d14a4f417a3c1a Mon Sep 17 00:00:00 2001
From: lucianommartins
Date: Mon, 13 May 2024 16:58:53 +0000
Subject: [PATCH 2/3] Fixes to the concerns raised during Safety.ipynb review process

---
 quickstarts/Safety.ipynb | 783 ++++++++++++++++++++------------------
 1 file changed, 405 insertions(+), 378 deletions(-)

diff --git a/quickstarts/Safety.ipynb b/quickstarts/Safety.ipynb
index b37422281..55dd2f9dc 100644
--- a/quickstarts/Safety.ipynb
+++ b/quickstarts/Safety.ipynb
@@ -1,381 +1,408 @@
 {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Tce3stUlHN0L"
- },
- "source": [
- "##### Copyright 2024 Google LLC."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "tuOe1ymfHZPu" - }, - "outputs": [], - "source": [ - "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", - "# you may not use this file except in compliance with the License.\n", - "# You may obtain a copy of the License at\n", - "#\n", - "# https://www.apache.org/licenses/LICENSE-2.0\n", - "#\n", - "# Unless required by applicable law or agreed to in writing, software\n", - "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "# See the License for the specific language governing permissions and\n", - "# limitations under the License." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "yeadDkMiISin" - }, - "source": [ - "# Gemini API: Safety Quickstart" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "lEXQ3OwKIa-O" - }, - "source": [ - "\n", - " \n", - "
\n", - " Run in Google Colab\n", - "
" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "uOxMUKTxR-_j" - }, - "source": [ - "The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You'll write a prompt that's blocked, see the reason why, and then adjust the filters to unblock it.\n", - "\n", - "Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "9OEoeosRTv-5" - }, - "outputs": [], - "source": [ - "!pip install -q -U google-generativeai # Install the Python SDK" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3VAUtJubX7MG" - }, - "source": [ - "## Import the Gemini python SDK\n", - "\n", - "Once the kernel is restarted, you can import the Gemini SDK:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "TS9l5igubpHO" - }, - "outputs": [], - "source": [ - "import google.generativeai as genai" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "gHYFrFPjSGNq" - }, - "source": [ - "## Set up your API key\n", - "\n", - "To run the following cell, your API key must be stored it in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or you're not sure how to create a Colab Secret, see the [Authentication](https://github.com/google-gemini/cookbook/blob/main/quickstarts/Authentication.ipynb) quickstart for an example." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "ab9ASynfcIZn" - }, - "outputs": [], - "source": [ - "from google.colab import userdata\n", - "GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n", - "genai.configure(api_key=GOOGLE_API_KEY)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "LZfoK3I3hu6V" - }, - "source": [ - "## Send your prompt request to Gemini\n", - "\n", - "Pick the prompt you want to use to test the safety filters settings. An examples could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark` which was previously tested and trigger the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.\n", - "\n", - "The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)." 
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "2bcfnGEviwTI"
- },
- "outputs": [],
- "source": [
- "model = genai.GenerativeModel('gemini-1.0-pro')\n",
- "\n",
- "unsafe_prompt = \"insert your unsafe prompt here\" # @param {type:\"string\"}\n",
- "response = model.generate_content(unsafe_prompt)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "WR_2A_sxk8sK"
- },
- "source": [
- "This response object gives you safety feedback about the candidate answers Gemini generates for you.\n",
- "\n",
- "For each candidate answer you need to check its `finish_reason`, e.g. `response.candidates[0].finish_reason`.\n",
- "\n",
- "As described in the [Gemini API safety filters documentation](https://ai.google.dev/gemini-api/docs/safety-settings#safety-feedback):\n",
- "- if `candidate.finish_reason` is `FinishReason.STOP`, your generation request ran successfully;\n",
- "- if `candidate.finish_reason` is `FinishReason.SAFETY`, your generation request was blocked for safety reasons, and accessing `response.text` will raise an error because nothing was generated."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "8887de812dc0"
- },
- "outputs": [],
- "source": [
- "print(response.candidates[0].finish_reason)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "XBdqPso3kamW"
- },
- "source": [
- "If the `finish_reason` is `FinishReason.SAFETY`, you can check which filter caused the block by inspecting the candidate's `safety_ratings` list:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "he-OfzBbhACQ"
- },
- "outputs": [],
- "source": [
- "print(response.candidates[0].safety_ratings)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "z9-SdzjbxWXT"
- },
- "source": [
- "As the request was blocked by the safety filters, accessing `response.text` will raise an error (as nothing was generated by the model):"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "L1Da4cJ3xej3"
- },
- "outputs": [],
- "source": [
- "try:\n",
- "    print(response.text)\n",
- "except ValueError:\n",
- "    print(\"No information generated by the model.\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "4672af98ac57"
- },
- "source": [
- "## Customizing safety settings\n",
- "\n",
- "Depending on the scenario you are working with, it may be necessary to customize the safety filters' behavior to allow a certain degree of unsafe results.\n",
- "\n",
- "To make this customization, define a `safety_settings` dictionary as part of your `model.generate_content()` request. In the example below, all the filters are set to not block any content.\n",
- "\n",
- "**Important:** If your prompt is too unsafe, Gemini will avoid generating results even if you set all the filters to none, honoring Google's commitment to responsible AI development and its [AI Principles](https://ai.google/responsibility/principles/)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "338fb9a6af78"
- },
- "outputs": [],
- "source": [
- "response = model.generate_content(\n",
- "    unsafe_prompt,\n",
- "    safety_settings={\n",
- "        'HATE': 'BLOCK_NONE',\n",
- "        'HARASSMENT': 'BLOCK_NONE',\n",
- "        'SEXUAL': 'BLOCK_NONE',\n",
- "        'DANGEROUS': 'BLOCK_NONE'\n",
- "    })"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "564K7R8rwWhs"
- },
- "source": [
- "Checking the `candidate.finish_reason` information again: if the request was not too unsafe, it should now show the value `FinishReason.STOP`, which means that the request was successfully processed by Gemini."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "LazB08GBpc1w"
- },
- "outputs": [],
- "source": [
- "print(response.candidates[0].finish_reason)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "86c560e0a641"
- },
- "source": [
- "Since the request ran successfully, you can check the result in `response.text`:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "0c2847c49262"
- },
- "outputs": [],
- "source": [
- "try:\n",
- "    print(response.text)\n",
- "except ValueError:\n",
- "    print(\"No information generated by the model.\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "47298a4eef40"
- },
- "source": [
- "And if you check the safety ratings: as you set all filters to `BLOCK_NONE`, no filtering category was triggered:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "028febe8df68"
- },
- "outputs": [],
- "source": [
- "print(response.candidates[0].safety_ratings)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "n1UdbxVt3ysY"
- },
- "source": [
- "## Learning more\n",
- "\n",
- "Learn more with these articles on [safety guidance](https://ai.google.dev/docs/safety_guidance) and [safety settings](https://ai.google.dev/docs/safety_setting_gemini).\n",
- "\n",
- "## Useful API references\n",
- "\n",
- "There are 4 configurable safety settings for the Gemini API:\n",
- "* `HARM_CATEGORY_HATE_SPEECH`\n",
- "* `HARM_CATEGORY_HARASSMENT`\n",
- "* `HARM_CATEGORY_SEXUALLY_EXPLICIT`\n",
- "* `HARM_CATEGORY_DANGEROUS_CONTENT`\n",
- "\n",
- "* You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n",
- "\n",
- "Safety settings can be set in the [genai.GenerativeModel](https://ai.google.dev/api/python/google/generativeai/GenerativeModel) constructor.\n",
- "\n",
- "* They can also be passed on each request to [GenerativeModel.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) or [ChatSession.send_message](https://ai.google.dev/api/python/google/generativeai/ChatSession?hl=en#send_message).\n",
- "\n",
- "- The [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse) returns [SafetyRatings](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) for the prompt in the [GenerateContentResponse.prompt_feedback](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse/PromptFeedback), and for each [Candidate](https://ai.google.dev/api/python/google/ai/generativelanguage/Candidate) in the `safety_ratings` attribute.\n",
- "\n",
- "- A [glm.SafetySetting](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetySetting) 
contains: [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [glm.HarmBlockThreshold](https://ai.google.dev/api/python/google/generativeai/types/HarmBlockThreshold)\n", - "\n", - "- A [glm.SafetyRating](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) contains a [HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [HarmProbability](https://ai.google.dev/api/python/google/generativeai/types/HarmProbability)\n", - "\n", - "The [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) enum includes both the categories for PaLM and Gemini models.\n", - "\n", - "- When specifying enum values the SDK will accept the enum values themselves, or their integer or string representations.\n", - "\n", - "- The SDK will also accept abbreviated string representations: `[\"HARM_CATEGORY_DANGEROUS_CONTENT\", \"DANGEROUS_CONTENT\", \"DANGEROUS\"]` are all valid. Strings are case insensitive." - ] - } - ], - "metadata": { - "colab": { - "name": "Safety.ipynb", - "toc_visible": true - }, - "google": { - "image_path": "/static/site-assets/images/docs/logo-python.svg", - "keywords": [ - "examples", - "gemini", - "beginner", - "googleai", - "quickstart", - "python", - "text", - "chat", - "vision", - "embed" - ] - }, - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - } + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "Tce3stUlHN0L" + }, + "source": [ + "##### Copyright 2024 Google LLC." + ] }, - "nbformat": 4, - "nbformat_minor": 0 + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "tuOe1ymfHZPu" + }, + "outputs": [], + "source": [ + "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "yeadDkMiISin" + }, + "source": [ + "# Gemini API: Safety Quickstart" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "lEXQ3OwKIa-O" + }, + "source": [ + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uOxMUKTxR-_j" + }, + "source": [ + "The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You'll write a prompt that's blocked, see the reason why, and then adjust the filters to unblock it.\n", + "\n", + "Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "9OEoeosRTv-5" + }, + "outputs": [], + "source": [ + "!pip install -q -U google-generativeai # Install the Python SDK" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3VAUtJubX7MG" + }, + "source": [ + "## Import the Gemini python SDK\n", + "\n", + "Once the kernel is restarted, you can import the Gemini SDK:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "TS9l5igubpHO" + }, + "outputs": [], + "source": [ + "import google.generativeai as genai" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gHYFrFPjSGNq" + }, + "source": [ + "## Set up your API key\n", + "\n", + "To run the following cell, your API key must be stored it in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or you're not sure how to create a Colab Secret, see the [Authentication](https://github.com/google-gemini/cookbook/blob/main/quickstarts/Authentication.ipynb) quickstart for an example." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ab9ASynfcIZn" + }, + "outputs": [], + "source": [ + "from google.colab import userdata\n", + "GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n", + "genai.configure(api_key=GOOGLE_API_KEY)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "LZfoK3I3hu6V" + }, + "source": [ + "## Send your prompt request to Gemini\n", + "\n", + "Pick the prompt you want to use to test the safety filters settings. An examples could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark` which was previously tested and trigger the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.\n", + "\n", + "The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2bcfnGEviwTI", + "tags": [] + }, + "outputs": [], + "source": [ + "model = genai.GenerativeModel('gemini-1.0-pro')\n", + "\n", + "unsafe_prompt = \"Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark\"\n", + "response = model.generate_content(unsafe_prompt)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "WR_2A_sxk8sK" + }, + "source": [ + "This response object gives you safety feedback about the candidate answers Gemini generates to you.\n", + "\n", + "For each candidate answer you need to check `response.candidates.finish_reason`.\n", + "\n", + "As you can find on the [Gemini API safety filters documentation](https://ai.google.dev/gemini-api/docs/safety-settings#safety-feedback):\n", + "- if the `candidate.finish_reason` is `FinishReason.STOP` means that your generation request ran successfully\n", + "- if the `candidate.finish_reason` is `FinishReason.SAFETY` means that your generation request was blocked by safety reasons. It also means that the `response.text` structure will be empty." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "8887de812dc0" + }, + "outputs": [], + "source": [ + "print(response.candidates[0].finish_reason)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XBdqPso3kamW" + }, + "source": [ + "If the `finish_reason` is `FinishReason.SAFETY` you can check which filter caused the block checking the `safety_ratings` list for the candidate answer:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "he-OfzBbhACQ" + }, + "outputs": [], + "source": [ + "print(response.candidates[0].safety_ratings)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "z9-SdzjbxWXT" + }, + "source": [ + "As the request was blocked by the safety filters, the `response.text` field will be empty (as nothing as generated by the model):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "L1Da4cJ3xej3" + }, + "outputs": [], + "source": [ + "try:\n", + " print(response.text)\n", + "except:\n", + " print(\"No information generated by the model.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "4672af98ac57" + }, + "source": [ + "## Customizing safety settings\n", + "\n", + "Depending on the scenario you are working with, it may be necessary to customize the safety filters behaviors to allow a certain degree of unsafety results.\n", + "\n", + "To make this customization you must define a `safety_settings` dictionary as part of your `model.generate_content()` request. In the example below, all the filters are being set to do not block contents.\n", + "\n", + "**Important:** To guarantee the Google commitment with the Responsible AI development and its [AI Principles](https://ai.google/responsibility/principles/), for some prompts Gemini will avoid generating the results even if you set all the filters to none." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "338fb9a6af78" + }, + "outputs": [], + "source": [ + "response = model.generate_content(\n", + " unsafe_prompt,\n", + " safety_settings={\n", + " 'HATE': 'BLOCK_NONE',\n", + " 'HARASSMENT': 'BLOCK_NONE',\n", + " 'SEXUAL' : 'BLOCK_NONE',\n", + " 'DANGEROUS' : 'BLOCK_NONE'\n", + " })" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "564K7R8rwWhs" + }, + "source": [ + "Checking again the `candidate.finish_reason` information, if the request was not too unsafe, it must show now the value as `FinishReason.STOP` which means that the request was successfully processed by Gemini." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "LazB08GBpc1w" + }, + "outputs": [], + "source": [ + "print(response.candidates[0].finish_reason)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "86c560e0a641" + }, + "source": [ + "Since the request was successfully generated, you can check the result on the `response.text`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "0c2847c49262" + }, + "outputs": [], + "source": [ + "try:\n", + " print(response.text)\n", + "except:\n", + " print(\"No information generated by the model.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "47298a4eef40" + }, + "source": [ + "And if you check the safety filters ratings, as you set all filters to be ignored, no filtering category was trigerred:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "028febe8df68" + }, + "outputs": [], + "source": [ + "print(response.candidates[0].safety_ratings)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "n1UdbxVt3ysY" + }, + "source": [ + "## Learning more\n", + "\n", + "Learn more with these articles on [safety guidance](https://ai.google.dev/docs/safety_guidance) and [safety settings](https://ai.google.dev/docs/safety_setting_gemini).\n", + "\n", + "## Useful API references\n", + "\n", + "There are 4 configurable safety settings for the Gemini API:\n", + "* `HARM_CATEGORY_DANGEROUS`\n", + "* `HARM_CATEGORY_HARASSMENT`\n", + "* `HARM_CATEGORY_SEXUALLY_EXPLICIT`\n", + "* `HARM_CATEGORY_DANGEROUS`\n", + "\n", + "You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n", + "\n", + "Safety settings can be set in the [genai.GenerativeModel](https://ai.google.dev/api/python/google/generativeai/GenerativeModel) constructor.\n", + "\n", + "* They can also be passed on each request to [GenerativeModel.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) or [ChatSession.send_message](https://ai.google.dev/api/python/google/generativeai/ChatSession?hl=en#send_message).\n", + "\n", + "- The [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse) returns [SafetyRatings](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) for the prompt in the [GenerateContentResponse.prompt_feedback](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse/PromptFeedback), and for each [Candidate](https://ai.google.dev/api/python/google/ai/generativelanguage/Candidate) in the `safety_ratings` attribute.\n", + "\n", + "- A [glm.SafetySetting](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetySetting) 
contains: [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [glm.HarmBlockThreshold](https://ai.google.dev/api/python/google/generativeai/types/HarmBlockThreshold)\n", + "\n", + "- A [glm.SafetyRating](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) contains a [HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [HarmProbability](https://ai.google.dev/api/python/google/generativeai/types/HarmProbability)\n", + "\n", + "The [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) enum includes both the categories for PaLM and Gemini models.\n", + "\n", + "- When specifying enum values the SDK will accept the enum values themselves, or their integer or string representations.\n", + "\n", + "- The SDK will also accept abbreviated string representations: `[\"HARM_CATEGORY_DANGEROUS_CONTENT\", \"DANGEROUS_CONTENT\", \"DANGEROUS\"]` are all valid. Strings are case insensitive." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "colab": { + "name": "Safety.ipynb", + "toc_visible": true + }, + "environment": { + "kernel": "python3", + "name": "tf2-cpu.2-11.m120", + "type": "gcloud", + "uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/tf2-cpu.2-11:m120" + }, + "google": { + "image_path": "/static/site-assets/images/docs/logo-python.svg", + "keywords": [ + "examples", + "gemini", + "beginner", + "googleai", + "quickstart", + "python", + "text", + "chat", + "vision", + "embed" + ] + }, + "kernelspec": { + "display_name": "Python 3 (Local)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.14" + } + }, + "nbformat": 4, + "nbformat_minor": 4 } From 9fbf57c9bf93c2499b7b3dde5f0d5bc252f04927 Mon Sep 17 00:00:00 2001 From: lucianommartins Date: Mon, 13 May 2024 18:25:10 +0000 Subject: [PATCH 3/3] formatting Safety.ipynb notebook --- quickstarts/Safety.ipynb | 783 +++++++++++++++++++-------------------- 1 file changed, 378 insertions(+), 405 deletions(-) diff --git a/quickstarts/Safety.ipynb b/quickstarts/Safety.ipynb index 55dd2f9dc..6ccbc13bf 100644 --- a/quickstarts/Safety.ipynb +++ b/quickstarts/Safety.ipynb @@ -1,408 +1,381 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "Tce3stUlHN0L" - }, - "source": [ - "##### Copyright 2024 Google LLC." - ] + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "Tce3stUlHN0L" + }, + "source": [ + "##### Copyright 2024 Google LLC." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "cellView": "form", + "id": "tuOe1ymfHZPu" + }, + "outputs": [], + "source": [ + "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "yeadDkMiISin" + }, + "source": [ + "# Gemini API: Safety Quickstart" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "lEXQ3OwKIa-O" + }, + "source": [ + "\n", + " \n", + "
\n", + " Run in Google Colab\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uOxMUKTxR-_j" + }, + "source": [ + "The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You'll write a prompt that's blocked, see the reason why, and then adjust the filters to unblock it.\n", + "\n", + "Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "9OEoeosRTv-5" + }, + "outputs": [], + "source": [ + "!pip install -q -U google-generativeai # Install the Python SDK" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "3VAUtJubX7MG" + }, + "source": [ + "## Import the Gemini python SDK\n", + "\n", + "Once the kernel is restarted, you can import the Gemini SDK:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "TS9l5igubpHO" + }, + "outputs": [], + "source": [ + "import google.generativeai as genai" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "gHYFrFPjSGNq" + }, + "source": [ + "## Set up your API key\n", + "\n", + "To run the following cell, your API key must be stored it in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or you're not sure how to create a Colab Secret, see the [Authentication](https://github.com/google-gemini/cookbook/blob/main/quickstarts/Authentication.ipynb) quickstart for an example." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "ab9ASynfcIZn" + }, + "outputs": [], + "source": [ + "from google.colab import userdata\n", + "GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n", + "genai.configure(api_key=GOOGLE_API_KEY)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "LZfoK3I3hu6V" + }, + "source": [ + "## Send your prompt request to Gemini\n", + "\n", + "Pick the prompt you want to use to test the safety filters settings. An examples could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark` which was previously tested and trigger the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.\n", + "\n", + "The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2bcfnGEviwTI" + }, + "outputs": [], + "source": [ + "model = genai.GenerativeModel('gemini-1.0-pro')\n", + "\n", + "unsafe_prompt = \"Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark\"\n", + "response = model.generate_content(unsafe_prompt)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "WR_2A_sxk8sK" + }, + "source": [ + "This response object gives you safety feedback about the candidate answers Gemini generates to you.\n", + "\n", + "For each candidate answer you need to check `response.candidates.finish_reason`.\n", + "\n", + "As you can find on the [Gemini API safety filters documentation](https://ai.google.dev/gemini-api/docs/safety-settings#safety-feedback):\n", + "- if the `candidate.finish_reason` is `FinishReason.STOP` means that your generation request ran successfully\n", + "- if the `candidate.finish_reason` is `FinishReason.SAFETY` means that your generation request was blocked by safety reasons. It also means that the `response.text` structure will be empty." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "8887de812dc0" + }, + "outputs": [], + "source": [ + "print(response.candidates[0].finish_reason)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XBdqPso3kamW" + }, + "source": [ + "If the `finish_reason` is `FinishReason.SAFETY` you can check which filter caused the block checking the `safety_ratings` list for the candidate answer:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "he-OfzBbhACQ" + }, + "outputs": [], + "source": [ + "print(response.candidates[0].safety_ratings)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "z9-SdzjbxWXT" + }, + "source": [ + "As the request was blocked by the safety filters, the `response.text` field will be empty (as nothing as generated by the model):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "L1Da4cJ3xej3" + }, + "outputs": [], + "source": [ + "try:\n", + " print(response.text)\n", + "except:\n", + " print(\"No information generated by the model.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "4672af98ac57" + }, + "source": [ + "## Customizing safety settings\n", + "\n", + "Depending on the scenario you are working with, it may be necessary to customize the safety filters behaviors to allow a certain degree of unsafety results.\n", + "\n", + "To make this customization you must define a `safety_settings` dictionary as part of your `model.generate_content()` request. In the example below, all the filters are being set to do not block contents.\n", + "\n", + "**Important:** To guarantee the Google commitment with the Responsible AI development and its [AI Principles](https://ai.google/responsibility/principles/), for some prompts Gemini will avoid generating the results even if you set all the filters to none." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "338fb9a6af78" + }, + "outputs": [], + "source": [ + "response = model.generate_content(\n", + " unsafe_prompt,\n", + " safety_settings={\n", + " 'HATE': 'BLOCK_NONE',\n", + " 'HARASSMENT': 'BLOCK_NONE',\n", + " 'SEXUAL' : 'BLOCK_NONE',\n", + " 'DANGEROUS' : 'BLOCK_NONE'\n", + " })" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "564K7R8rwWhs" + }, + "source": [ + "Checking again the `candidate.finish_reason` information, if the request was not too unsafe, it must show now the value as `FinishReason.STOP` which means that the request was successfully processed by Gemini." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "LazB08GBpc1w" + }, + "outputs": [], + "source": [ + "print(response.candidates[0].finish_reason)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "86c560e0a641" + }, + "source": [ + "Since the request was successfully generated, you can check the result on the `response.text`:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "0c2847c49262" + }, + "outputs": [], + "source": [ + "try:\n", + " print(response.text)\n", + "except:\n", + " print(\"No information generated by the model.\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "47298a4eef40" + }, + "source": [ + "And if you check the safety filters ratings, as you set all filters to be ignored, no filtering category was trigerred:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "028febe8df68" + }, + "outputs": [], + "source": [ + "print(response.candidates[0].safety_ratings)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "n1UdbxVt3ysY" + }, + "source": [ + "## Learning more\n", + "\n", + "Learn more with these articles on [safety guidance](https://ai.google.dev/docs/safety_guidance) and [safety settings](https://ai.google.dev/docs/safety_setting_gemini).\n", + "\n", + "## Useful API references\n", + "\n", + "There are 4 configurable safety settings for the Gemini API:\n", + "* `HARM_CATEGORY_DANGEROUS`\n", + "* `HARM_CATEGORY_HARASSMENT`\n", + "* `HARM_CATEGORY_SEXUALLY_EXPLICIT`\n", + "* `HARM_CATEGORY_DANGEROUS`\n", + "\n", + "You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n", + "\n", + "Safety settings can be set in the [genai.GenerativeModel](https://ai.google.dev/api/python/google/generativeai/GenerativeModel) constructor.\n", + "\n", + "* They can also be passed on each request to [GenerativeModel.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) or [ChatSession.send_message](https://ai.google.dev/api/python/google/generativeai/ChatSession?hl=en#send_message).\n", + "\n", + "- The [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse) returns [SafetyRatings](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) for the prompt in the [GenerateContentResponse.prompt_feedback](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse/PromptFeedback), and for each [Candidate](https://ai.google.dev/api/python/google/ai/generativelanguage/Candidate) in the `safety_ratings` attribute.\n", + "\n", + "- A [glm.SafetySetting](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetySetting) 
contains: [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [glm.HarmBlockThreshold](https://ai.google.dev/api/python/google/generativeai/types/HarmBlockThreshold)\n", + "\n", + "- A [glm.SafetyRating](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) contains a [HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [HarmProbability](https://ai.google.dev/api/python/google/generativeai/types/HarmProbability)\n", + "\n", + "The [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) enum includes both the categories for PaLM and Gemini models.\n", + "\n", + "- When specifying enum values the SDK will accept the enum values themselves, or their integer or string representations.\n", + "\n", + "- The SDK will also accept abbreviated string representations: `[\"HARM_CATEGORY_DANGEROUS_CONTENT\", \"DANGEROUS_CONTENT\", \"DANGEROUS\"]` are all valid. Strings are case insensitive." + ] + } + ], + "metadata": { + "colab": { + "name": "Safety.ipynb", + "toc_visible": true + }, + "google": { + "image_path": "/static/site-assets/images/docs/logo-python.svg", + "keywords": [ + "examples", + "gemini", + "beginner", + "googleai", + "quickstart", + "python", + "text", + "chat", + "vision", + "embed" + ] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + } }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "cellView": "form", - "id": "tuOe1ymfHZPu" - }, - "outputs": [], - "source": [ - "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", - "# you may not use this file except in compliance with the License.\n", - "# You may obtain a copy of the License at\n", - "#\n", - "# https://www.apache.org/licenses/LICENSE-2.0\n", - "#\n", - "# Unless required by applicable law or agreed to in writing, software\n", - "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", - "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", - "# See the License for the specific language governing permissions and\n", - "# limitations under the License." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "yeadDkMiISin" - }, - "source": [ - "# Gemini API: Safety Quickstart" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "lEXQ3OwKIa-O" - }, - "source": [ - "\n", - " \n", - "
\n", - " Run in Google Colab\n", - "
" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "uOxMUKTxR-_j" - }, - "source": [ - "The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You'll write a prompt that's blocked, see the reason why, and then adjust the filters to unblock it.\n", - "\n", - "Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "9OEoeosRTv-5" - }, - "outputs": [], - "source": [ - "!pip install -q -U google-generativeai # Install the Python SDK" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3VAUtJubX7MG" - }, - "source": [ - "## Import the Gemini python SDK\n", - "\n", - "Once the kernel is restarted, you can import the Gemini SDK:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "TS9l5igubpHO" - }, - "outputs": [], - "source": [ - "import google.generativeai as genai" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "gHYFrFPjSGNq" - }, - "source": [ - "## Set up your API key\n", - "\n", - "To run the following cell, your API key must be stored it in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or you're not sure how to create a Colab Secret, see the [Authentication](https://github.com/google-gemini/cookbook/blob/main/quickstarts/Authentication.ipynb) quickstart for an example." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "ab9ASynfcIZn" - }, - "outputs": [], - "source": [ - "from google.colab import userdata\n", - "GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n", - "genai.configure(api_key=GOOGLE_API_KEY)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "LZfoK3I3hu6V" - }, - "source": [ - "## Send your prompt request to Gemini\n", - "\n", - "Pick the prompt you want to use to test the safety filters settings. An examples could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark` which was previously tested and trigger the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.\n", - "\n", - "The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "2bcfnGEviwTI", - "tags": [] - }, - "outputs": [], - "source": [ - "model = genai.GenerativeModel('gemini-1.0-pro')\n", - "\n", - "unsafe_prompt = \"Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark\"\n", - "response = model.generate_content(unsafe_prompt)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "WR_2A_sxk8sK" - }, - "source": [ - "This response object gives you safety feedback about the candidate answers Gemini generates to you.\n", - "\n", - "For each candidate answer you need to check `response.candidates.finish_reason`.\n", - "\n", - "As you can find on the [Gemini API safety filters documentation](https://ai.google.dev/gemini-api/docs/safety-settings#safety-feedback):\n", - "- if the `candidate.finish_reason` is `FinishReason.STOP` means that your generation request ran successfully\n", - "- if the `candidate.finish_reason` is `FinishReason.SAFETY` means that your generation request was blocked by safety reasons. It also means that the `response.text` structure will be empty." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "8887de812dc0" - }, - "outputs": [], - "source": [ - "print(response.candidates[0].finish_reason)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XBdqPso3kamW" - }, - "source": [ - "If the `finish_reason` is `FinishReason.SAFETY` you can check which filter caused the block checking the `safety_ratings` list for the candidate answer:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "he-OfzBbhACQ" - }, - "outputs": [], - "source": [ - "print(response.candidates[0].safety_ratings)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "z9-SdzjbxWXT" - }, - "source": [ - "As the request was blocked by the safety filters, the `response.text` field will be empty (as nothing as generated by the model):" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "L1Da4cJ3xej3" - }, - "outputs": [], - "source": [ - "try:\n", - " print(response.text)\n", - "except:\n", - " print(\"No information generated by the model.\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "4672af98ac57" - }, - "source": [ - "## Customizing safety settings\n", - "\n", - "Depending on the scenario you are working with, it may be necessary to customize the safety filters behaviors to allow a certain degree of unsafety results.\n", - "\n", - "To make this customization you must define a `safety_settings` dictionary as part of your `model.generate_content()` request. In the example below, all the filters are being set to do not block contents.\n", - "\n", - "**Important:** To guarantee the Google commitment with the Responsible AI development and its [AI Principles](https://ai.google/responsibility/principles/), for some prompts Gemini will avoid generating the results even if you set all the filters to none." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "338fb9a6af78" - }, - "outputs": [], - "source": [ - "response = model.generate_content(\n", - " unsafe_prompt,\n", - " safety_settings={\n", - " 'HATE': 'BLOCK_NONE',\n", - " 'HARASSMENT': 'BLOCK_NONE',\n", - " 'SEXUAL' : 'BLOCK_NONE',\n", - " 'DANGEROUS' : 'BLOCK_NONE'\n", - " })" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "564K7R8rwWhs" - }, - "source": [ - "Checking again the `candidate.finish_reason` information, if the request was not too unsafe, it must show now the value as `FinishReason.STOP` which means that the request was successfully processed by Gemini." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "LazB08GBpc1w" - }, - "outputs": [], - "source": [ - "print(response.candidates[0].finish_reason)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "86c560e0a641" - }, - "source": [ - "Since the request was successfully generated, you can check the result on the `response.text`:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "0c2847c49262" - }, - "outputs": [], - "source": [ - "try:\n", - " print(response.text)\n", - "except:\n", - " print(\"No information generated by the model.\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "47298a4eef40" - }, - "source": [ - "And if you check the safety filters ratings, as you set all filters to be ignored, no filtering category was trigerred:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "028febe8df68" - }, - "outputs": [], - "source": [ - "print(response.candidates[0].safety_ratings)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "n1UdbxVt3ysY" - }, - "source": [ - "## Learning more\n", - "\n", - "Learn more with these articles on [safety guidance](https://ai.google.dev/docs/safety_guidance) and [safety settings](https://ai.google.dev/docs/safety_setting_gemini).\n", - "\n", - "## Useful API references\n", - "\n", - "There are 4 configurable safety settings for the Gemini API:\n", - "* `HARM_CATEGORY_DANGEROUS`\n", - "* `HARM_CATEGORY_HARASSMENT`\n", - "* `HARM_CATEGORY_SEXUALLY_EXPLICIT`\n", - "* `HARM_CATEGORY_DANGEROUS`\n", - "\n", - "You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n", - "\n", - "Safety settings can be set in the [genai.GenerativeModel](https://ai.google.dev/api/python/google/generativeai/GenerativeModel) constructor.\n", - "\n", - "* They can also be passed on each request to [GenerativeModel.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) or [ChatSession.send_message](https://ai.google.dev/api/python/google/generativeai/ChatSession?hl=en#send_message).\n", - "\n", - "- The [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse) returns [SafetyRatings](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) for the prompt in the [GenerateContentResponse.prompt_feedback](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse/PromptFeedback), and for each [Candidate](https://ai.google.dev/api/python/google/ai/generativelanguage/Candidate) in the `safety_ratings` attribute.\n", - "\n", - "- A [glm.SafetySetting](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetySetting) 
contains: [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [glm.HarmBlockThreshold](https://ai.google.dev/api/python/google/generativeai/types/HarmBlockThreshold)\n", - "\n", - "- A [glm.SafetyRating](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) contains a [HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [HarmProbability](https://ai.google.dev/api/python/google/generativeai/types/HarmProbability)\n", - "\n", - "The [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) enum includes both the categories for PaLM and Gemini models.\n", - "\n", - "- When specifying enum values the SDK will accept the enum values themselves, or their integer or string representations.\n", - "\n", - "- The SDK will also accept abbreviated string representations: `[\"HARM_CATEGORY_DANGEROUS_CONTENT\", \"DANGEROUS_CONTENT\", \"DANGEROUS\"]` are all valid. Strings are case insensitive." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } - ], - "metadata": { - "colab": { - "name": "Safety.ipynb", - "toc_visible": true - }, - "environment": { - "kernel": "python3", - "name": "tf2-cpu.2-11.m120", - "type": "gcloud", - "uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/tf2-cpu.2-11:m120" - }, - "google": { - "image_path": "/static/site-assets/images/docs/logo-python.svg", - "keywords": [ - "examples", - "gemini", - "beginner", - "googleai", - "quickstart", - "python", - "text", - "chat", - "vision", - "embed" - ] - }, - "kernelspec": { - "display_name": "Python 3 (Local)", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.14" - } - }, - "nbformat": 4, - "nbformat_minor": 4 + "nbformat": 4, + "nbformat_minor": 0 }