diff --git a/quickstarts/Safety.ipynb b/quickstarts/Safety.ipynb index dbc2c211e..6ccbc13bf 100644 --- a/quickstarts/Safety.ipynb +++ b/quickstarts/Safety.ipynb @@ -61,7 +61,7 @@ "source": [ "The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You'll write a prompt that's blocked, see the reason why, and then adjust the filters to unblock it.\n", "\n", - "Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, we're focused on the code." + "Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code." ] }, { @@ -75,6 +75,17 @@ "!pip install -q -U google-generativeai # Install the Python SDK" ] }, + { + "cell_type": "markdown", + "metadata": { + "id": "3VAUtJubX7MG" + }, + "source": [ + "## Import the Gemini Python SDK\n", + "\n", + "Once the kernel is restarted, you can import the Gemini SDK:" + ] + }, { "cell_type": "code", "execution_count": null, "metadata": { @@ -116,7 +127,9 @@ "id": "LZfoK3I3hu6V" }, "source": [ - "## Prompt Feedback\n", + "## Send your prompt request to Gemini\n", + "\n", + "Pick the prompt you want to use to test the safety filter settings. An example could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark`, which was previously tested and triggers the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.\n", "\n", "The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)." ] }, { @@ -131,7 +144,7 @@ "source": [ "model = genai.GenerativeModel('gemini-1.0-pro')\n", "\n", - "unsafe_prompt = # Put your unsafe prompt here\n", + "unsafe_prompt = \"Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark\"\n", "response = model.generate_content(unsafe_prompt)" ] }, { @@ -141,11 +154,13 @@ "id": "WR_2A_sxk8sK" }, "source": [ - "This response object gives you safety feedback in two ways:\n", + "This response object gives you safety feedback about the candidate answers Gemini generates for you.\n", "\n", - "* The `prompt_feedback.safety_ratings` attribute contains a list of safety ratings for the input prompt.\n", + "For each candidate answer, you need to check its `finish_reason`.\n", "\n", - "* If your prompt is blocked, `prompt_feedback.block_reason` field will explain why." + "As described in the [Gemini API safety filters documentation](https://ai.google.dev/gemini-api/docs/safety-settings#safety-feedback):\n", + "- if `candidate.finish_reason` is `FinishReason.STOP`, your generation request ran successfully\n", + "- if `candidate.finish_reason` is `FinishReason.SAFETY`, your generation request was blocked for safety reasons, and `response.text` will be empty."
] }, { @@ -156,7 +171,16 @@ }, "outputs": [], "source": [ - "bool(response.prompt_feedback.block_reason)" + "print(response.candidates[0].finish_reason)" ] }, + { + "cell_type": "markdown", + "metadata": { + "id": "XBdqPso3kamW" + }, + "source": [ + "If the `finish_reason` is `FinishReason.SAFETY`, you can check which filter caused the block by checking the `safety_ratings` list for the candidate answer:" + ] + }, { "cell_type": "code", "execution_count": null, "metadata": { @@ -167,27 +191,30 @@ }, "outputs": [], "source": [ - "response.prompt_feedback.safety_ratings" + "print(response.candidates[0].safety_ratings)" ] }, { "cell_type": "markdown", "metadata": { - "id": "72b4a8808bb9" + "id": "z9-SdzjbxWXT" }, "source": [ - "If the prompt is blocked because of the safety ratings, you will not get any candidates in the response:" + "As the request was blocked by the safety filters, the `response.text` field will be empty (as nothing was generated by the model):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "id": "f20d9269325d" + "id": "L1Da4cJ3xej3" }, "outputs": [], "source": [ - "response.candidates" + "try:\n", + " print(response.text)\n", + "except:\n", + " print(\"No information generated by the model.\")" ] }, { "cell_type": "markdown", "metadata": { "id": "4672af98ac57" }, "source": [ - "### Safety settings" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "2a6229f6d3a1" - }, - "source": [ - "Adjust the safety settings and the prompt is no longer blocked. The Gemini API has four configurable safety settings." + "## Customizing safety settings\n", + "\n", + "Depending on the scenario you are working with, it may be necessary to customize the safety filters' behavior to allow a certain degree of unsafe results.\n", + "\n", + "To make this customization, you must define a `safety_settings` dictionary as part of your `model.generate_content()` request. In the example below, all the filters are set to not block any content.\n", + "\n", + "**Important:** To uphold Google's commitment to Responsible AI development and its [AI Principles](https://ai.google/responsibility/principles/), for some prompts Gemini will avoid generating results even if you set all the filters to none." ] }, { @@ -229,113 +253,64 @@ { "cell_type": "markdown", "metadata": { - "id": "86c560e0a641" + "id": "564K7R8rwWhs" }, "source": [ - "With the new settings, the `blocked_reason` is no longer set." + "Checking the `candidate.finish_reason` again, if the request was not too unsafe, it should now show the value `FinishReason.STOP`, which means that the request was successfully processed by Gemini." ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "id": "0c2847c49262" + "id": "LazB08GBpc1w" }, "outputs": [], "source": [ - "bool(response.prompt_feedback.block_reason)" + "print(response.candidates[0].finish_reason)" ] }, { "cell_type": "markdown", "metadata": { - "id": "47298a4eef40" - }, "source": [ - "And a candidate response is returned." - ] - }, - { - "cell_type": "code", "execution_count": null, "metadata": { - "id": "028febe8df68" - }, "outputs": [], "source": [ - "len(response.candidates)" - ] - }, - { - "cell_type": "markdown", "metadata": { - "id": "ujVlQoC43N3B" - }, "source": [ - "You can check `response.text` for the response."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "de8ee74634af" - }, - "outputs": [], - "source": [ - "response.text" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3d401c247957" - }, - "source": [ - "### Candidate ratings" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "3d306960dffb" + "id": "86c560e0a641" }, "source": [ - "For a prompt that is not blocked, the response object contains a list of `candidate` objects (just 1 for now). Each candidate includes a `finish_reason`:" + "Since the request was successfully generated, you can check the result on the `response.text`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "id": "e49b53f69a2c" + "id": "0c2847c49262" }, "outputs": [], "source": [ - "candidate = response.candidates[0]\n", - "candidate.finish_reason" + "try:\n", + " print(response.text)\n", + "except:\n", + " print(\"No information generated by the model.\")" ] }, { "cell_type": "markdown", "metadata": { - "id": "badddf10089b" + "id": "47298a4eef40" }, "source": [ - "`FinishReason.STOP` means that the model finished its output normally.\n", - "\n", - "`FinishReason.SAFETY` means the candidate's `safety_ratings` exceeded the request's `safety_settings` threshold." + "And if you check the safety filters ratings, as you set all filters to be ignored, no filtering category was trigerred:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "id": "2b60d9f96af0" + "id": "028febe8df68" }, "outputs": [], "source": [ - "candidate.safety_ratings" + "print(response.candidates[0].safety_ratings)" ] }, { @@ -352,13 +327,11 @@ "\n", "There are 4 configurable safety settings for the Gemini API:\n", "* `HARM_CATEGORY_DANGEROUS`\n", - "*`HARM_CATEGORY_HARASSMENT`\n", + "* `HARM_CATEGORY_HARASSMENT`\n", "* `HARM_CATEGORY_SEXUALLY_EXPLICIT`\n", "* `HARM_CATEGORY_DANGEROUS`\n", "\n", - "Note: while the API [reference](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) includes others, the remainder are for older models.\n", - "\n", - "* You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n", + "You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n", "\n", "Safety settings can be set in the [genai.GenerativeModel](https://ai.google.dev/api/python/google/generativeai/GenerativeModel) constructor.\n", "\n",