From c17ce69bf8b6fde73ccda96493a778a8ac7a44c5 Mon Sep 17 00:00:00 2001
From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com>
Date: Fri, 1 Nov 2024 16:45:52 -0700
Subject: [PATCH 1/9] Bump django from 5.0.8 to 5.0.9 in
/contract-tests/images/applications/django (#275)
Bumps [django](https://github.com/django/django) from 5.0.8 to 5.0.9.
Commits:
- 8e68f93 [5.0.x] Bumped version for 5.0.9 release.
- 96d8404 [5.0.x] Fixed CVE-2024-45231 -- Avoided server error on password reset when e...
- 813de26 [5.0.x] Fixed CVE-2024-45230 -- Mitigated potential DoS in urlize and urlizet...
- 05495d4 [5.0.x] Fixed grammatical error in stub release notes for upcoming security r...
- ccd9583 [5.0.x] Added stub release notes and release date for 5.0.9 and 4.2.16.
- 1a5aca6 [5.0.x] Added CVE-2024-41989, CVE-2024-41990, CVE-2024-41991, and CVE-2024-42...
- 4f08fae [5.0.x] Post-release version bump.
- See full diff in compare view
[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=django&package-manager=pip&previous-version=5.0.8&new-version=5.0.9)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/aws-observability/aws-otel-python-instrumentation/network/alerts).
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Harry
Co-authored-by: Vastin <3690049+vastin@users.noreply.github.com>
---
contract-tests/images/applications/django/requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/contract-tests/images/applications/django/requirements.txt b/contract-tests/images/applications/django/requirements.txt
index 4cd2fc233..0af2ec462 100644
--- a/contract-tests/images/applications/django/requirements.txt
+++ b/contract-tests/images/applications/django/requirements.txt
@@ -1,4 +1,4 @@
opentelemetry-distro==0.46b0
opentelemetry-exporter-otlp-proto-grpc==1.25.0
typing-extensions==4.9.0
-django==5.0.8
\ No newline at end of file
+django==5.0.9
\ No newline at end of file
From ca8e0e452e71ba6fb6c5e4464b5e6c063c435a7d Mon Sep 17 00:00:00 2001
From: Lei Wang <66336933+wangzlei@users.noreply.github.com>
Date: Mon, 18 Nov 2024 08:57:39 -0800
Subject: [PATCH 2/9] Lambda layer supports AWS Lambda Python 3.13 Runtime
(#293)
*Issue #, if available:*
*Description of changes:*
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
---
.github/workflows/release_lambda.yml | 42 +++++++++++-----------
lambda-layer/src/Dockerfile | 9 ++++-
lambda-layer/terraform/lambda/main.tf | 2 +-
lambda-layer/terraform/lambda/variables.tf | 10 ++----
4 files changed, 32 insertions(+), 31 deletions(-)
diff --git a/.github/workflows/release_lambda.yml b/.github/workflows/release_lambda.yml
index 74ceda1ff..01b1d2443 100644
--- a/.github/workflows/release_lambda.yml
+++ b/.github/workflows/release_lambda.yml
@@ -98,7 +98,7 @@ jobs:
aws lambda publish-layer-version \
--layer-name ${{ env.LAYER_NAME }} \
--content S3Bucket=${{ env.BUCKET_NAME }},S3Key=aws-opentelemetry-python-layer.zip \
- --compatible-runtimes python3.10 python3.11 python3.12 \
+ --compatible-runtimes python3.10 python3.11 python3.12 python3.13 \
--compatible-architectures "arm64" "x86_64" \
--license-info "Apache-2.0" \
--description "AWS Distro of OpenTelemetry Lambda Layer for Python Runtime" \
@@ -184,16 +184,16 @@ jobs:
with:
name: layer.tf
path: layer.tf
- - name: Commit changes
- env:
- GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- run: |
- git config user.name "github-actions[bot]"
- git config user.email "github-actions[bot]@users.noreply.github.com"
- mv layer.tf lambda-layer/terraform/lambda/
- git add lambda-layer/terraform/lambda/layer.tf
- git commit -m "Update Lambda layer ARNs for releasing" || echo "No changes to commit"
- git push
+# - name: Commit changes
+# env:
+# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+# run: |
+# git config user.name "github-actions[bot]"
+# git config user.email "github-actions[bot]@users.noreply.github.com"
+# mv layer.tf lambda-layer/terraform/lambda/
+# git add lambda-layer/terraform/lambda/layer.tf
+# git commit -m "Update Lambda layer ARNs for releasing" || echo "No changes to commit"
+# git push
create-release:
runs-on: ubuntu-latest
needs: generate-release-note
@@ -205,16 +205,16 @@ jobs:
echo "COMMIT_SHA=${GITHUB_SHA}" >> $GITHUB_ENV
SHORT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
echo "SHORT_SHA=${SHORT_SHA}" >> $GITHUB_ENV
- - name: Create Tag
- run: |
- git config user.name "github-actions[bot]"
- git config user.email "github-actions[bot]@users.noreply.github.com"
- TAG_NAME="lambda-${SHORT_SHA}"
- git tag -a "$TAG_NAME" -m "Release Lambda layer based on commit $TAG_NAME"
- git push origin "$TAG_NAME"
- echo "TAG_NAME=${TAG_NAME}" >> $GITHUB_ENV
- env:
- GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+# - name: Create Tag
+# run: |
+# git config user.name "github-actions[bot]"
+# git config user.email "github-actions[bot]@users.noreply.github.com"
+# TAG_NAME="lambda-${SHORT_SHA}"
+# git tag -a "$TAG_NAME" -m "Release Lambda layer based on commit $TAG_NAME"
+# git push origin "$TAG_NAME"
+# echo "TAG_NAME=${TAG_NAME}" >> $GITHUB_ENV
+# env:
+# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create Release
id: create_release
uses: actions/create-release@v1
diff --git a/lambda-layer/src/Dockerfile b/lambda-layer/src/Dockerfile
index ae13dc5ae..8a4f1f328 100644
--- a/lambda-layer/src/Dockerfile
+++ b/lambda-layer/src/Dockerfile
@@ -1,4 +1,4 @@
-FROM public.ecr.aws/sam/build-python3.12 AS python312
+FROM public.ecr.aws/sam/build-python3.13 AS python313
ADD . /workspace
@@ -19,6 +19,13 @@ RUN mkdir -p /build && \
chmod 755 /build/otel-instrument && \
rm -rf /build/python/urllib3*
+FROM public.ecr.aws/sam/build-python3.12 AS python312
+
+WORKDIR /workspace
+
+COPY --from=python313 /build /build
+
+RUN python3 -m compileall /build/python
FROM public.ecr.aws/sam/build-python3.11 AS python311
diff --git a/lambda-layer/terraform/lambda/main.tf b/lambda-layer/terraform/lambda/main.tf
index 49cdb762c..be81aac34 100644
--- a/lambda-layer/terraform/lambda/main.tf
+++ b/lambda-layer/terraform/lambda/main.tf
@@ -5,7 +5,7 @@ locals {
resource "aws_lambda_layer_version" "sdk_layer" {
layer_name = var.sdk_layer_name
filename = "${path.module}/../../src/build/aws-opentelemetry-python-layer.zip"
- compatible_runtimes = ["python3.10", "python3.11", "python3.12"]
+ compatible_runtimes = ["python3.10", "python3.11", "python3.12", "python3.13"]
license_info = "Apache-2.0"
source_code_hash = filebase64sha256("${path.module}/../../src/build/aws-opentelemetry-python-layer.zip")
}
diff --git a/lambda-layer/terraform/lambda/variables.tf b/lambda-layer/terraform/lambda/variables.tf
index 7f1c5386e..8fdb7193b 100644
--- a/lambda-layer/terraform/lambda/variables.tf
+++ b/lambda-layer/terraform/lambda/variables.tf
@@ -1,7 +1,7 @@
variable "sdk_layer_name" {
type = string
description = "Name of published SDK layer"
- default = "aws-opentelemetry-distro-python"
+ default = "AWSOpenTelemetryDistroPython"
}
variable "function_name" {
@@ -19,7 +19,7 @@ variable "architecture" {
variable "runtime" {
type = string
description = "Python runtime version used for sample Lambda Function"
- default = "python3.12"
+ default = "python3.13"
}
variable "tracing_mode" {
@@ -27,9 +27,3 @@ variable "tracing_mode" {
description = "Lambda function tracing mode"
default = "Active"
}
-
-variable "enable_collector_layer" {
- type = bool
- description = "Enables building and usage of a layer for the collector. If false, it means either the SDK layer includes the collector or it is not used."
- default = false
-}
From 39bcbc6bc1f6b14b02575447a751a63b1613fe0a Mon Sep 17 00:00:00 2001
From: Lei Wang <66336933+wangzlei@users.noreply.github.com>
Date: Tue, 19 Nov 2024 15:22:47 -0800
Subject: [PATCH 3/9] Generate lambda layer release (#294)
*Issue #, if available:*
*Description of changes:*
Create a GitHub release for the Lambda layer. The tag name is
"lambda-v<version>", where the version follows the SDK version.
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
---
.github/workflows/release_lambda.yml | 51 ++++++----------------------
1 file changed, 11 insertions(+), 40 deletions(-)
diff --git a/.github/workflows/release_lambda.yml b/.github/workflows/release_lambda.yml
index 01b1d2443..3e02b0b35 100644
--- a/.github/workflows/release_lambda.yml
+++ b/.github/workflows/release_lambda.yml
@@ -3,6 +3,9 @@ name: Release Lambda layer
on:
workflow_dispatch:
inputs:
+ version:
+ description: The version to tag the lambda release with, e.g., 1.2.0
+ required: true
aws_region:
description: 'Deploy to aws regions'
required: true
@@ -184,45 +187,13 @@ jobs:
with:
name: layer.tf
path: layer.tf
-# - name: Commit changes
-# env:
-# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-# run: |
-# git config user.name "github-actions[bot]"
-# git config user.email "github-actions[bot]@users.noreply.github.com"
-# mv layer.tf lambda-layer/terraform/lambda/
-# git add lambda-layer/terraform/lambda/layer.tf
-# git commit -m "Update Lambda layer ARNs for releasing" || echo "No changes to commit"
-# git push
- create-release:
- runs-on: ubuntu-latest
- needs: generate-release-note
- steps:
- - name: Checkout Repo @ SHA - ${{ github.sha }}
- uses: actions/checkout@v4
- - name: Get latest commit SHA
- run: |
- echo "COMMIT_SHA=${GITHUB_SHA}" >> $GITHUB_ENV
- SHORT_SHA=$(echo $GITHUB_SHA | cut -c1-7)
- echo "SHORT_SHA=${SHORT_SHA}" >> $GITHUB_ENV
-# - name: Create Tag
-# run: |
-# git config user.name "github-actions[bot]"
-# git config user.email "github-actions[bot]@users.noreply.github.com"
-# TAG_NAME="lambda-${SHORT_SHA}"
-# git tag -a "$TAG_NAME" -m "Release Lambda layer based on commit $TAG_NAME"
-# git push origin "$TAG_NAME"
-# echo "TAG_NAME=${TAG_NAME}" >> $GITHUB_ENV
-# env:
-# GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- - name: Create Release
+ - name: Create GH release
id: create_release
- uses: actions/create-release@v1
env:
- GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- with:
- tag_name: ${{ env.TAG_NAME }}
- release_name: "Release AWSOpenTelemetryDistroPython Lambda Layer"
- body_path: lambda-layer/terraform/lambda/layer.tf
- draft: true
- prerelease: false
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # This token is provided by Actions, you do not need to create your own token
+ run: |
+ gh release create --target "$GITHUB_REF_NAME" \
+ --title "Release lambda-v${{ github.event.inputs.version }}" \
+ --draft \
+ "lambda-v${{ github.event.inputs.version }}" \
+ layer.tf
From 3b378a6b4cf407e3c897186018b7174ea1e20f2c Mon Sep 17 00:00:00 2001
From: Lei Wang <66336933+wangzlei@users.noreply.github.com>
Date: Thu, 21 Nov 2024 15:02:15 -0800
Subject: [PATCH 4/9] Update Lambda README.md, point users to AWS public
documentation (#295)
*Issue #, if available:*
*Description of changes:*
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
---
lambda-layer/README.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lambda-layer/README.md b/lambda-layer/README.md
index 5eb0fab46..18ea6d55b 100644
--- a/lambda-layer/README.md
+++ b/lambda-layer/README.md
@@ -1,6 +1,6 @@
# AWS Lambda Application Signals Support
-This package provides support for **Application Signals** in AWS Lambda environment.
+This folder provides support for **Application Signals** in AWS Lambda environments. You can explore this repository to learn how to build a Lambda layer for AWS Python Runtimes from scratch in your AWS account. Alternatively, you can directly visit the AWS documentation, [Enable Application Signals on Lambda functions](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Application-Signals-Enable-Lambda.html), and use the AWS-managed Lambda layers we provide.
## Features
@@ -53,4 +53,4 @@ Lambda function and view the traces and metrics through the AWS CloudWatch Conso
By default the layer enable botocore and aws-lambda instrumentation libraries only for better Lambda cold start performance. To
enable all opentelemetry python
supported libraries you can set environment variable `OTEL_PYTHON_DISABLED_INSTRUMENTATIONS=none`. Refer to details in
-[OpenTelemetry Python Disabling Specific Instrumentations](Disabling Specific Instrumentations)
\ No newline at end of file
+[OpenTelemetry Python Disabling Specific Instrumentations](https://opentelemetry.io/docs/zero-code/python/configuration/#disabling-specific-instrumentations)
From d305721e166e08af884ca6068a89b8eef8bb1b84 Mon Sep 17 00:00:00 2001
From: Jeel-mehta <72543735+Jeel-mehta@users.noreply.github.com>
Date: Thu, 21 Nov 2024 16:05:51 -0800
Subject: [PATCH 5/9] Gen-AI python implementation (#290)
*Issue #, if available:*
*Description of changes:*
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
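
The core subtlety in this patch is that a Bedrock `InvokeModel` response body
is a one-shot botocore `StreamingBody`: reading it for telemetry would leave
nothing for the application to consume. A minimal sketch of the
read-and-replenish pattern the patch uses (names are illustrative; `result`
stands in for the botocore response dict):

```python
import io
import json

from botocore.response import StreamingBody


def read_body_for_telemetry(result: dict) -> dict:
    """Drain the streaming body for telemetry, then restore a fresh stream."""
    body_content = result["body"].read()  # consumes the original stream
    response_body = json.loads(body_content.decode("utf-8"))
    # ... set gen_ai usage/finish-reason span attributes from response_body ...
    # Replenish the body so downstream application code can still read it.
    result["body"] = StreamingBody(io.BytesIO(body_content), len(body_content))
    return response_body
```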
---------
Co-authored-by: Jeel Mehta
Co-authored-by: Michael He <53622546+yiyuan-he@users.noreply.github.com>
Co-authored-by: Min Xia
---
.../distro/_aws_span_processing_util.py | 6 +
.../distro/patches/_bedrock_patches.py | 188 ++++++++++++++-
.../distro/test_instrumentation_patch.py | 222 +++++++++++++++++-
3 files changed, 404 insertions(+), 12 deletions(-)
diff --git a/aws-opentelemetry-distro/src/amazon/opentelemetry/distro/_aws_span_processing_util.py b/aws-opentelemetry-distro/src/amazon/opentelemetry/distro/_aws_span_processing_util.py
index 082c2de5c..24aaa68dc 100644
--- a/aws-opentelemetry-distro/src/amazon/opentelemetry/distro/_aws_span_processing_util.py
+++ b/aws-opentelemetry-distro/src/amazon/opentelemetry/distro/_aws_span_processing_util.py
@@ -29,6 +29,12 @@
# TODO: Use Semantic Conventions once upgrade to 0.47b0
GEN_AI_REQUEST_MODEL: str = "gen_ai.request.model"
GEN_AI_SYSTEM: str = "gen_ai.system"
+GEN_AI_REQUEST_MAX_TOKENS: str = "gen_ai.request.max_tokens"
+GEN_AI_REQUEST_TEMPERATURE: str = "gen_ai.request.temperature"
+GEN_AI_REQUEST_TOP_P: str = "gen_ai.request.top_p"
+GEN_AI_RESPONSE_FINISH_REASONS: str = "gen_ai.response.finish_reasons"
+GEN_AI_USAGE_INPUT_TOKENS: str = "gen_ai.usage.input_tokens"
+GEN_AI_USAGE_OUTPUT_TOKENS: str = "gen_ai.usage.output_tokens"
# Get dialect keywords retrieved from dialect_keywords.json file.
diff --git a/aws-opentelemetry-distro/src/amazon/opentelemetry/distro/patches/_bedrock_patches.py b/aws-opentelemetry-distro/src/amazon/opentelemetry/distro/patches/_bedrock_patches.py
index 581ca36f4..4a6eb10f5 100644
--- a/aws-opentelemetry-distro/src/amazon/opentelemetry/distro/patches/_bedrock_patches.py
+++ b/aws-opentelemetry-distro/src/amazon/opentelemetry/distro/patches/_bedrock_patches.py
@@ -2,7 +2,13 @@
# SPDX-License-Identifier: Apache-2.0
import abc
import inspect
-from typing import Dict, Optional
+import io
+import json
+import logging
+import math
+from typing import Any, Dict, Optional
+
+from botocore.response import StreamingBody
from amazon.opentelemetry.distro._aws_attribute_keys import (
AWS_BEDROCK_AGENT_ID,
@@ -11,7 +17,16 @@
AWS_BEDROCK_GUARDRAIL_ID,
AWS_BEDROCK_KNOWLEDGE_BASE_ID,
)
-from amazon.opentelemetry.distro._aws_span_processing_util import GEN_AI_REQUEST_MODEL, GEN_AI_SYSTEM
+from amazon.opentelemetry.distro._aws_span_processing_util import (
+ GEN_AI_REQUEST_MAX_TOKENS,
+ GEN_AI_REQUEST_MODEL,
+ GEN_AI_REQUEST_TEMPERATURE,
+ GEN_AI_REQUEST_TOP_P,
+ GEN_AI_RESPONSE_FINISH_REASONS,
+ GEN_AI_SYSTEM,
+ GEN_AI_USAGE_INPUT_TOKENS,
+ GEN_AI_USAGE_OUTPUT_TOKENS,
+)
from opentelemetry.instrumentation.botocore.extensions.types import (
_AttributeMapT,
_AwsSdkCallContext,
@@ -28,6 +43,10 @@
_MODEL_ID: str = "modelId"
_AWS_BEDROCK_SYSTEM: str = "aws_bedrock"
+_logger = logging.getLogger(__name__)
+# Set logger level to DEBUG
+_logger.setLevel(logging.DEBUG)
+
class _BedrockAgentOperation(abc.ABC):
"""
@@ -240,3 +259,168 @@ def extract_attributes(self, attributes: _AttributeMapT):
model_id = self._call_context.params.get(_MODEL_ID)
if model_id:
attributes[GEN_AI_REQUEST_MODEL] = model_id
+
+ # Get the request body if it exists
+ body = self._call_context.params.get("body")
+ if body:
+ try:
+ request_body = json.loads(body)
+
+ if "amazon.titan" in model_id:
+ self._extract_titan_attributes(attributes, request_body)
+ elif "anthropic.claude" in model_id:
+ self._extract_claude_attributes(attributes, request_body)
+ elif "meta.llama" in model_id:
+ self._extract_llama_attributes(attributes, request_body)
+ elif "cohere.command" in model_id:
+ self._extract_cohere_attributes(attributes, request_body)
+ elif "ai21.jamba" in model_id:
+ self._extract_ai21_attributes(attributes, request_body)
+ elif "mistral" in model_id:
+ self._extract_mistral_attributes(attributes, request_body)
+
+ except json.JSONDecodeError:
+ _logger.debug("Error: Unable to parse the body as JSON")
+
+ def _extract_titan_attributes(self, attributes, request_body):
+ config = request_body.get("textGenerationConfig", {})
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TEMPERATURE, config.get("temperature"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TOP_P, config.get("topP"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_MAX_TOKENS, config.get("maxTokenCount"))
+
+ def _extract_claude_attributes(self, attributes, request_body):
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_MAX_TOKENS, request_body.get("max_tokens"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TEMPERATURE, request_body.get("temperature"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TOP_P, request_body.get("top_p"))
+
+ def _extract_cohere_attributes(self, attributes, request_body):
+ prompt = request_body.get("message")
+ if prompt:
+ attributes[GEN_AI_USAGE_INPUT_TOKENS] = math.ceil(len(prompt) / 6)
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_MAX_TOKENS, request_body.get("max_tokens"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TEMPERATURE, request_body.get("temperature"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TOP_P, request_body.get("p"))
+
+ def _extract_ai21_attributes(self, attributes, request_body):
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_MAX_TOKENS, request_body.get("max_tokens"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TEMPERATURE, request_body.get("temperature"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TOP_P, request_body.get("top_p"))
+
+ def _extract_llama_attributes(self, attributes, request_body):
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_MAX_TOKENS, request_body.get("max_gen_len"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TEMPERATURE, request_body.get("temperature"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TOP_P, request_body.get("top_p"))
+
+ def _extract_mistral_attributes(self, attributes, request_body):
+ prompt = request_body.get("prompt")
+ if prompt:
+ attributes[GEN_AI_USAGE_INPUT_TOKENS] = math.ceil(len(prompt) / 6)
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_MAX_TOKENS, request_body.get("max_tokens"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TEMPERATURE, request_body.get("temperature"))
+ self._set_if_not_none(attributes, GEN_AI_REQUEST_TOP_P, request_body.get("top_p"))
+
+ @staticmethod
+ def _set_if_not_none(attributes, key, value):
+ if value is not None:
+ attributes[key] = value
+
+ def on_success(self, span: Span, result: Dict[str, Any]):
+ model_id = self._call_context.params.get(_MODEL_ID)
+
+ if not model_id:
+ return
+
+ if "body" in result and isinstance(result["body"], StreamingBody):
+ original_body = None
+ try:
+ original_body = result["body"]
+ body_content = original_body.read()
+
+ # Use one stream for telemetry
+ stream = io.BytesIO(body_content)
+ telemetry_content = stream.read()
+ response_body = json.loads(telemetry_content.decode("utf-8"))
+ if "amazon.titan" in model_id:
+ self._handle_amazon_titan_response(span, response_body)
+ elif "anthropic.claude" in model_id:
+ self._handle_anthropic_claude_response(span, response_body)
+ elif "meta.llama" in model_id:
+ self._handle_meta_llama_response(span, response_body)
+ elif "cohere.command" in model_id:
+ self._handle_cohere_command_response(span, response_body)
+ elif "ai21.jamba" in model_id:
+ self._handle_ai21_jamba_response(span, response_body)
+ elif "mistral" in model_id:
+ self._handle_mistral_mistral_response(span, response_body)
+ # Replenish stream for downstream application use
+ new_stream = io.BytesIO(body_content)
+ result["body"] = StreamingBody(new_stream, len(body_content))
+
+ except json.JSONDecodeError:
+ _logger.debug("Error: Unable to parse the response body as JSON")
+ except Exception as e: # pylint: disable=broad-exception-caught, invalid-name
+ _logger.debug("Error processing response: %s", e)
+ finally:
+ if original_body is not None:
+ original_body.close()
+
+ # pylint: disable=no-self-use
+ def _handle_amazon_titan_response(self, span: Span, response_body: Dict[str, Any]):
+ if "inputTextTokenCount" in response_body:
+ span.set_attribute(GEN_AI_USAGE_INPUT_TOKENS, response_body["inputTextTokenCount"])
+ if "results" in response_body and response_body["results"]:
+ result = response_body["results"][0]
+ if "tokenCount" in result:
+ span.set_attribute(GEN_AI_USAGE_OUTPUT_TOKENS, result["tokenCount"])
+ if "completionReason" in result:
+ span.set_attribute(GEN_AI_RESPONSE_FINISH_REASONS, [result["completionReason"]])
+
+ # pylint: disable=no-self-use
+ def _handle_anthropic_claude_response(self, span: Span, response_body: Dict[str, Any]):
+ if "usage" in response_body:
+ usage = response_body["usage"]
+ if "input_tokens" in usage:
+ span.set_attribute(GEN_AI_USAGE_INPUT_TOKENS, usage["input_tokens"])
+ if "output_tokens" in usage:
+ span.set_attribute(GEN_AI_USAGE_OUTPUT_TOKENS, usage["output_tokens"])
+ if "stop_reason" in response_body:
+ span.set_attribute(GEN_AI_RESPONSE_FINISH_REASONS, [response_body["stop_reason"]])
+
+ # pylint: disable=no-self-use
+ def _handle_cohere_command_response(self, span: Span, response_body: Dict[str, Any]):
+ # Output tokens: Approximate from the response text
+ if "text" in response_body:
+ span.set_attribute(GEN_AI_USAGE_OUTPUT_TOKENS, math.ceil(len(response_body["text"]) / 6))
+ if "finish_reason" in response_body:
+ span.set_attribute(GEN_AI_RESPONSE_FINISH_REASONS, [response_body["finish_reason"]])
+
+ # pylint: disable=no-self-use
+ def _handle_ai21_jamba_response(self, span: Span, response_body: Dict[str, Any]):
+ if "usage" in response_body:
+ usage = response_body["usage"]
+ if "prompt_tokens" in usage:
+ span.set_attribute(GEN_AI_USAGE_INPUT_TOKENS, usage["prompt_tokens"])
+ if "completion_tokens" in usage:
+ span.set_attribute(GEN_AI_USAGE_OUTPUT_TOKENS, usage["completion_tokens"])
+ if "choices" in response_body:
+ choices = response_body["choices"][0]
+ if "finish_reason" in choices:
+ span.set_attribute(GEN_AI_RESPONSE_FINISH_REASONS, [choices["finish_reason"]])
+
+ # pylint: disable=no-self-use
+ def _handle_meta_llama_response(self, span: Span, response_body: Dict[str, Any]):
+ if "prompt_token_count" in response_body:
+ span.set_attribute(GEN_AI_USAGE_INPUT_TOKENS, response_body["prompt_token_count"])
+ if "generation_token_count" in response_body:
+ span.set_attribute(GEN_AI_USAGE_OUTPUT_TOKENS, response_body["generation_token_count"])
+ if "stop_reason" in response_body:
+ span.set_attribute(GEN_AI_RESPONSE_FINISH_REASONS, [response_body["stop_reason"]])
+
+ # pylint: disable=no-self-use
+ def _handle_mistral_mistral_response(self, span: Span, response_body: Dict[str, Any]):
+ if "outputs" in response_body:
+ outputs = response_body["outputs"][0]
+ if "text" in outputs:
+ span.set_attribute(GEN_AI_USAGE_OUTPUT_TOKENS, math.ceil(len(outputs["text"]) / 6))
+ if "stop_reason" in outputs:
+ span.set_attribute(GEN_AI_RESPONSE_FINISH_REASONS, [outputs["stop_reason"]])
diff --git a/aws-opentelemetry-distro/tests/amazon/opentelemetry/distro/test_instrumentation_patch.py b/aws-opentelemetry-distro/tests/amazon/opentelemetry/distro/test_instrumentation_patch.py
index b27d5e799..86c6bc39f 100644
--- a/aws-opentelemetry-distro/tests/amazon/opentelemetry/distro/test_instrumentation_patch.py
+++ b/aws-opentelemetry-distro/tests/amazon/opentelemetry/distro/test_instrumentation_patch.py
@@ -1,12 +1,16 @@
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
+import json
+import math
import os
+from io import BytesIO
from typing import Any, Dict
from unittest import TestCase
from unittest.mock import MagicMock, patch
import gevent.monkey
import pkg_resources
+from botocore.response import StreamingBody
from amazon.opentelemetry.distro.patches._instrumentation_patch import (
AWS_GEVENT_PATCH_MODULES,
@@ -173,7 +177,7 @@ def _test_unpatched_gevent_instrumentation(self):
self.assertFalse(gevent.monkey.is_module_patched("queue"), "gevent queue module has been patched")
self.assertFalse(gevent.monkey.is_module_patched("contextvars"), "gevent contextvars module has been patched")
- # pylint: disable=too-many-statements
+ # pylint: disable=too-many-statements, too-many-locals
def _test_patched_botocore_instrumentation(self):
# Kinesis
self.assertTrue("kinesis" in _KNOWN_EXTENSIONS)
@@ -211,12 +215,209 @@ def _test_patched_botocore_instrumentation(self):
bedrock_agent_runtime_sucess_attributes: Dict[str, str] = _do_on_success_bedrock("bedrock-agent-runtime")
self.assertEqual(len(bedrock_agent_runtime_sucess_attributes), 0)
- # BedrockRuntime
+ # BedrockRuntime - Amazon Titan Models
self.assertTrue("bedrock-runtime" in _KNOWN_EXTENSIONS)
- bedrock_runtime_attributes: Dict[str, str] = _do_extract_attributes_bedrock("bedrock-runtime")
- self.assertEqual(len(bedrock_runtime_attributes), 2)
+ request_body = {
+ "textGenerationConfig": {
+ "maxTokenCount": 512,
+ "temperature": 0.9,
+ "topP": 0.75,
+ }
+ }
+ bedrock_runtime_attributes: Dict[str, str] = _do_extract_attributes_bedrock(
+ "bedrock-runtime", model_id="amazon.titan", request_body=json.dumps(request_body)
+ )
+ self.assertEqual(len(bedrock_runtime_attributes), 5)
self.assertEqual(bedrock_runtime_attributes["gen_ai.system"], _GEN_AI_SYSTEM)
- self.assertEqual(bedrock_runtime_attributes["gen_ai.request.model"], _GEN_AI_REQUEST_MODEL)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.model"], "amazon.titan")
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.max_tokens"], 512)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.temperature"], 0.9)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.top_p"], 0.75)
+ response_body = {
+ "inputTextTokenCount": 123,
+ "results": [
+ {
+ "tokenCount": 456,
+ "outputText": "testing",
+ "completionReason": "FINISH",
+ }
+ ],
+ }
+ json_bytes = json.dumps(response_body).encode("utf-8")
+ body_bytes = BytesIO(json_bytes)
+ streaming_body = StreamingBody(body_bytes, len(json_bytes))
+ bedrock_runtime_success_attributes: Dict[str, str] = _do_on_success_bedrock(
+ "bedrock-runtime", model_id="amazon.titan", streaming_body=streaming_body
+ )
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.usage.input_tokens"], 123)
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.usage.output_tokens"], 456)
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.response.finish_reasons"], ["FINISH"])
+
+ # BedrockRuntime - Anthropic Claude Models
+
+ self.assertTrue("bedrock-runtime" in _KNOWN_EXTENSIONS)
+ request_body = {
+ "anthropic_version": "bedrock-2023-05-31",
+ "max_tokens": 512,
+ "temperature": 0.5,
+ "top_p": 0.999,
+ }
+
+ bedrock_runtime_attributes: Dict[str, str] = _do_extract_attributes_bedrock(
+ "bedrock-runtime", model_id="anthropic.claude", request_body=json.dumps(request_body)
+ )
+ self.assertEqual(len(bedrock_runtime_attributes), 5)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.system"], _GEN_AI_SYSTEM)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.model"], "anthropic.claude")
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.max_tokens"], 512)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.temperature"], 0.5)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.top_p"], 0.999)
+ response_body = {
+ "stop_reason": "end_turn",
+ "stop_sequence": None,
+ "usage": {"input_tokens": 23, "output_tokens": 36},
+ }
+ json_bytes = json.dumps(response_body).encode("utf-8")
+ body_bytes = BytesIO(json_bytes)
+ streaming_body = StreamingBody(body_bytes, len(json_bytes))
+ bedrock_runtime_success_attributes: Dict[str, str] = _do_on_success_bedrock(
+ "bedrock-runtime", model_id="anthropic.claude", streaming_body=streaming_body
+ )
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.usage.input_tokens"], 23)
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.usage.output_tokens"], 36)
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.response.finish_reasons"], ["end_turn"])
+
+ # BedrockRuntime - Cohere Command Models
+ self.assertTrue("bedrock-runtime" in _KNOWN_EXTENSIONS)
+ request_body = {
+ "message": "Hello, world",
+ "max_tokens": 512,
+ "temperature": 0.5,
+ "p": 0.75,
+ }
+
+ bedrock_runtime_attributes: Dict[str, str] = _do_extract_attributes_bedrock(
+ "bedrock-runtime", model_id="cohere.command", request_body=json.dumps(request_body)
+ )
+ self.assertEqual(len(bedrock_runtime_attributes), 6)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.system"], _GEN_AI_SYSTEM)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.model"], "cohere.command")
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.max_tokens"], 512)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.temperature"], 0.5)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.top_p"], 0.75)
+ self.assertEqual(
+ bedrock_runtime_attributes["gen_ai.usage.input_tokens"], math.ceil(len(request_body["message"]) / 6)
+ )
+ response_body = {
+ "text": "Goodbye, world",
+ "finish_reason": "COMPLETE",
+ }
+ json_bytes = json.dumps(response_body).encode("utf-8")
+ body_bytes = BytesIO(json_bytes)
+ streaming_body = StreamingBody(body_bytes, len(json_bytes))
+ bedrock_runtime_success_attributes: Dict[str, str] = _do_on_success_bedrock(
+ "bedrock-runtime", model_id="cohere.command", streaming_body=streaming_body
+ )
+ self.assertEqual(
+ bedrock_runtime_success_attributes["gen_ai.usage.output_tokens"], math.ceil(len(response_body["text"]) / 6)
+ )
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.response.finish_reasons"], ["COMPLETE"])
+
+ # BedrockRuntime - AI21 Jamba Models
+ self.assertTrue("bedrock-runtime" in _KNOWN_EXTENSIONS)
+ request_body = {
+ "max_tokens": 512,
+ "temperature": 0.5,
+ "top_p": 0.9,
+ }
+
+ bedrock_runtime_attributes: Dict[str, str] = _do_extract_attributes_bedrock(
+ "bedrock-runtime", model_id="ai21.jamba", request_body=json.dumps(request_body)
+ )
+ self.assertEqual(len(bedrock_runtime_attributes), 5)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.system"], _GEN_AI_SYSTEM)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.model"], "ai21.jamba")
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.max_tokens"], 512)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.temperature"], 0.5)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.top_p"], 0.9)
+ response_body = {
+ "choices": [{"finish_reason": "stop"}],
+ "usage": {"prompt_tokens": 24, "completion_tokens": 31, "total_tokens": 55},
+ }
+ json_bytes = json.dumps(response_body).encode("utf-8")
+ body_bytes = BytesIO(json_bytes)
+ streaming_body = StreamingBody(body_bytes, len(json_bytes))
+ bedrock_runtime_success_attributes: Dict[str, str] = _do_on_success_bedrock(
+ "bedrock-runtime", model_id="ai21.jamba", streaming_body=streaming_body
+ )
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.usage.input_tokens"], 24)
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.usage.output_tokens"], 31)
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.response.finish_reasons"], ["stop"])
+
+ # BedrockRuntime - Meta LLama Models
+ self.assertTrue("bedrock-runtime" in _KNOWN_EXTENSIONS)
+ request_body = {
+ "max_gen_len": 512,
+ "temperature": 0.5,
+ "top_p": 0.9,
+ }
+
+ bedrock_runtime_attributes: Dict[str, str] = _do_extract_attributes_bedrock(
+ "bedrock-runtime", model_id="meta.llama", request_body=json.dumps(request_body)
+ )
+ self.assertEqual(len(bedrock_runtime_attributes), 5)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.system"], _GEN_AI_SYSTEM)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.model"], "meta.llama")
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.max_tokens"], 512)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.temperature"], 0.5)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.top_p"], 0.9)
+ response_body = {"prompt_token_count": 31, "generation_token_count": 36, "stop_reason": "stop"}
+ json_bytes = json.dumps(response_body).encode("utf-8")
+ body_bytes = BytesIO(json_bytes)
+ streaming_body = StreamingBody(body_bytes, len(json_bytes))
+ bedrock_runtime_success_attributes: Dict[str, str] = _do_on_success_bedrock(
+ "bedrock-runtime", model_id="meta.llama", streaming_body=streaming_body
+ )
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.usage.input_tokens"], 31)
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.usage.output_tokens"], 36)
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.response.finish_reasons"], ["stop"])
+
+ # BedrockRuntime - Mistral Models
+ self.assertTrue("bedrock-runtime" in _KNOWN_EXTENSIONS)
+ msg = "Hello, World"
+ formatted_prompt = f"[INST] {msg} [/INST]"
+ request_body = {
+ "prompt": formatted_prompt,
+ "max_tokens": 512,
+ "temperature": 0.5,
+ "top_p": 0.9,
+ }
+
+ bedrock_runtime_attributes: Dict[str, str] = _do_extract_attributes_bedrock(
+ "bedrock-runtime", model_id="mistral", request_body=json.dumps(request_body)
+ )
+ self.assertEqual(len(bedrock_runtime_attributes), 6)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.system"], _GEN_AI_SYSTEM)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.model"], "mistral")
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.max_tokens"], 512)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.temperature"], 0.5)
+ self.assertEqual(bedrock_runtime_attributes["gen_ai.request.top_p"], 0.9)
+ self.assertEqual(
+ bedrock_runtime_attributes["gen_ai.usage.input_tokens"], math.ceil(len(request_body["prompt"]) / 6)
+ )
+ response_body = {"outputs": [{"text": "Goodbye, World", "stop_reason": "stop"}]}
+ json_bytes = json.dumps(response_body).encode("utf-8")
+ body_bytes = BytesIO(json_bytes)
+ streaming_body = StreamingBody(body_bytes, len(json_bytes))
+ bedrock_runtime_success_attributes: Dict[str, str] = _do_on_success_bedrock(
+ "bedrock-runtime", model_id="mistral", streaming_body=streaming_body
+ )
+
+ self.assertEqual(
+ bedrock_runtime_success_attributes["gen_ai.usage.output_tokens"],
+ math.ceil(len(response_body["outputs"][0]["text"]) / 6),
+ )
+ self.assertEqual(bedrock_runtime_success_attributes["gen_ai.response.finish_reasons"], ["stop"])
# SecretsManager
self.assertTrue("secretsmanager" in _KNOWN_EXTENSIONS)
@@ -385,26 +586,27 @@ def _do_extract_sqs_attributes() -> Dict[str, str]:
return _do_extract_attributes(service_name, params)
-def _do_extract_attributes_bedrock(service, operation=None) -> Dict[str, str]:
+def _do_extract_attributes_bedrock(service, operation=None, model_id=None, request_body=None) -> Dict[str, str]:
params: Dict[str, Any] = {
"agentId": _BEDROCK_AGENT_ID,
"dataSourceId": _BEDROCK_DATASOURCE_ID,
"knowledgeBaseId": _BEDROCK_KNOWLEDGEBASE_ID,
"guardrailId": _BEDROCK_GUARDRAIL_ID,
- "modelId": _GEN_AI_REQUEST_MODEL,
+ "modelId": model_id,
+ "body": request_body,
}
return _do_extract_attributes(service, params, operation)
-def _do_on_success_bedrock(service, operation=None) -> Dict[str, str]:
+def _do_on_success_bedrock(service, operation=None, model_id=None, streaming_body=None) -> Dict[str, str]:
result: Dict[str, Any] = {
"agentId": _BEDROCK_AGENT_ID,
"dataSourceId": _BEDROCK_DATASOURCE_ID,
"knowledgeBaseId": _BEDROCK_KNOWLEDGEBASE_ID,
"guardrailId": _BEDROCK_GUARDRAIL_ID,
- "modelId": _GEN_AI_REQUEST_MODEL,
+ "body": streaming_body,
}
- return _do_on_success(service, result, operation)
+ return _do_on_success(service, result, operation, params={"modelId": model_id})
def _do_extract_secretsmanager_attributes() -> Dict[str, str]:
From 642427ec514387452df5030598b83ab686211442 Mon Sep 17 00:00:00 2001
From: Steve Liu
Date: Thu, 21 Nov 2024 17:25:54 -0800
Subject: [PATCH 6/9] feat: Add Contract Tests for new Gen AI attributes for
foundational models (#292)
Adds contract tests for the new gen_ai inference parameters introduced in
https://github.com/aws-observability/aws-otel-python-instrumentation/pull/290
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
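
For models that do not report token usage in the response (Cohere Command and
Mistral), the instrumentation approximates roughly one token per six
characters, and these tests assert the same estimate. A small worked example,
assuming the prompt used by the contract-test server:

```python
import math

# Prompt served by the contract-test application (60 characters).
prompt = "Describe the purpose of a 'hello world' program in one line."
input_tokens = math.ceil(len(prompt) / 6)  # ceil(60 / 6) == 10
output_tokens = math.ceil(len("test-generation-text") / 6)  # ceil(20 / 6) == 4
```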
---------
Co-authored-by: Michael He <53622546+yiyuan-he@users.noreply.github.com>
---
.../applications/botocore/botocore_server.py | 164 +++++++++++++--
.../test/amazon/base/contract_test_base.py | 8 +-
.../test/amazon/botocore/botocore_test.py | 186 ++++++++++++++++--
3 files changed, 327 insertions(+), 31 deletions(-)
diff --git a/contract-tests/images/applications/botocore/botocore_server.py b/contract-tests/images/applications/botocore/botocore_server.py
index f16948390..d1736d56c 100644
--- a/contract-tests/images/applications/botocore/botocore_server.py
+++ b/contract-tests/images/applications/botocore/botocore_server.py
@@ -6,6 +6,7 @@
import tempfile
from collections import namedtuple
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
+from io import BytesIO
from threading import Thread
import boto3
@@ -13,6 +14,7 @@
from botocore.client import BaseClient
from botocore.config import Config
from botocore.exceptions import ClientError
+from botocore.response import StreamingBody
from typing_extensions import Tuple, override
_PORT: int = 8080
@@ -285,28 +287,22 @@ def _handle_bedrock_request(self) -> None:
},
)
elif self.in_path("invokemodel/invoke-model"):
+ model_id, request_body, response_body = get_model_request_response(self.path)
+
set_main_status(200)
bedrock_runtime_client.meta.events.register(
"before-call.bedrock-runtime.InvokeModel",
- inject_200_success,
- )
- model_id = "amazon.titan-text-premier-v1:0"
- user_message = "Describe the purpose of a 'hello world' program in one line."
- prompt = f"[INST] {user_message} [/INST]"
- body = json.dumps(
- {
- "inputText": prompt,
- "textGenerationConfig": {
- "maxTokenCount": 3072,
- "stopSequences": [],
- "temperature": 0.7,
- "topP": 0.9,
- },
- }
+ lambda **kwargs: inject_200_success(
+ modelId=model_id,
+ body=response_body,
+ **kwargs,
+ ),
)
accept = "application/json"
content_type = "application/json"
- bedrock_runtime_client.invoke_model(body=body, modelId=model_id, accept=accept, contentType=content_type)
+ bedrock_runtime_client.invoke_model(
+ body=request_body, modelId=model_id, accept=accept, contentType=content_type
+ )
else:
set_main_status(404)
@@ -378,6 +374,137 @@ def _end_request(self, status_code: int):
self.end_headers()
+def get_model_request_response(path):
+ prompt = "Describe the purpose of a 'hello world' program in one line."
+ model_id = ""
+ request_body = {}
+ response_body = {}
+
+ if "amazon.titan" in path:
+ model_id = "amazon.titan-text-premier-v1:0"
+
+ request_body = {
+ "inputText": prompt,
+ "textGenerationConfig": {
+ "maxTokenCount": 3072,
+ "stopSequences": [],
+ "temperature": 0.7,
+ "topP": 0.9,
+ },
+ }
+
+ response_body = {
+ "inputTextTokenCount": 15,
+ "results": [
+ {
+ "tokenCount": 13,
+ "outputText": "text-test-response",
+ "completionReason": "CONTENT_FILTERED",
+ },
+ ],
+ }
+
+ if "anthropic.claude" in path:
+ model_id = "anthropic.claude-v2:1"
+
+ request_body = {
+ "anthropic_version": "bedrock-2023-05-31",
+ "max_tokens": 1000,
+ "temperature": 0.99,
+ "top_p": 1,
+ "messages": [
+ {
+ "role": "user",
+ "content": [{"type": "text", "text": prompt}],
+ },
+ ],
+ }
+
+ response_body = {
+ "stop_reason": "end_turn",
+ "usage": {
+ "input_tokens": 15,
+ "output_tokens": 13,
+ },
+ }
+
+ if "meta.llama" in path:
+ model_id = "meta.llama2-13b-chat-v1"
+
+ request_body = {"prompt": prompt, "max_gen_len": 512, "temperature": 0.5, "top_p": 0.9}
+
+ response_body = {"prompt_token_count": 31, "generation_token_count": 49, "stop_reason": "stop"}
+
+ if "cohere.command" in path:
+ model_id = "cohere.command-r-v1:0"
+
+ request_body = {
+ "chat_history": [],
+ "message": prompt,
+ "max_tokens": 512,
+ "temperature": 0.5,
+ "p": 0.65,
+ }
+
+ response_body = {
+ "chat_history": [
+ {"role": "USER", "message": prompt},
+ {"role": "CHATBOT", "message": "test-text-output"},
+ ],
+ "finish_reason": "COMPLETE",
+ "text": "test-generation-text",
+ }
+
+ if "ai21.jamba" in path:
+ model_id = "ai21.jamba-1-5-large-v1:0"
+
+ request_body = {
+ "messages": [
+ {
+ "role": "user",
+ "content": prompt,
+ },
+ ],
+ "top_p": 0.8,
+ "temperature": 0.6,
+ "max_tokens": 512,
+ }
+
+ response_body = {
+ "stop_reason": "end_turn",
+ "usage": {
+ "prompt_tokens": 21,
+ "completion_tokens": 24,
+ },
+ "choices": [
+ {"finish_reason": "stop"},
+ ],
+ }
+
+ if "mistral" in path:
+ model_id = "mistral.mistral-7b-instruct-v0:2"
+
+ request_body = {
+ "prompt": prompt,
+ "max_tokens": 4096,
+ "temperature": 0.75,
+ "top_p": 0.99,
+ }
+
+ response_body = {
+ "outputs": [
+ {
+ "text": "test-output-text",
+ "stop_reason": "stop",
+ },
+ ]
+ }
+
+ json_bytes = json.dumps(response_body).encode("utf-8")
+
+ return model_id, json.dumps(request_body), StreamingBody(BytesIO(json_bytes), len(json_bytes))
+
+
def set_main_status(status: int) -> None:
RequestHandler.main_status = status
@@ -490,11 +617,16 @@ def inject_200_success(**kwargs):
guardrail_arn = kwargs.get("guardrailArn")
if guardrail_arn is not None:
response_body["guardrailArn"] = guardrail_arn
+ model_id = kwargs.get("modelId")
+ if model_id is not None:
+ response_body["modelId"] = model_id
HTTPResponse = namedtuple("HTTPResponse", ["status_code", "headers", "body"])
headers = kwargs.get("headers", {})
body = kwargs.get("body", "")
+ response_body["body"] = body
http_response = HTTPResponse(200, headers=headers, body=body)
+
return http_response, response_body
diff --git a/contract-tests/tests/test/amazon/base/contract_test_base.py b/contract-tests/tests/test/amazon/base/contract_test_base.py
index ba96530b0..64569450b 100644
--- a/contract-tests/tests/test/amazon/base/contract_test_base.py
+++ b/contract-tests/tests/test/amazon/base/contract_test_base.py
@@ -173,6 +173,12 @@ def _assert_int_attribute(self, attributes_dict: Dict[str, AnyValue], key: str,
self.assertIsNotNone(actual_value)
self.assertEqual(expected_value, actual_value.int_value)
+ def _assert_float_attribute(self, attributes_dict: Dict[str, AnyValue], key: str, expected_value: float) -> None:
+ self.assertIn(key, attributes_dict)
+ actual_value: AnyValue = attributes_dict[key]
+ self.assertIsNotNone(actual_value)
+ self.assertEqual(expected_value, actual_value.double_value)
+
def _assert_match_attribute(self, attributes_dict: Dict[str, AnyValue], key: str, pattern: str) -> None:
self.assertIn(key, attributes_dict)
actual_value: AnyValue = attributes_dict[key]
@@ -237,5 +243,5 @@ def _is_valid_regex(self, pattern: str) -> bool:
try:
re.compile(pattern)
return True
- except re.error:
+ except (re.error, StopIteration, RuntimeError, KeyError):
return False
diff --git a/contract-tests/tests/test/amazon/botocore/botocore_test.py b/contract-tests/tests/test/amazon/botocore/botocore_test.py
index f5ae91a59..b2821a8b6 100644
--- a/contract-tests/tests/test/amazon/botocore/botocore_test.py
+++ b/contract-tests/tests/test/amazon/botocore/botocore_test.py
@@ -1,5 +1,6 @@
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
+import math
from logging import INFO, Logger, getLogger
from typing import Dict, List
@@ -34,13 +35,21 @@
_AWS_BEDROCK_GUARDRAIL_ID: str = "aws.bedrock.guardrail.id"
_AWS_BEDROCK_KNOWLEDGE_BASE_ID: str = "aws.bedrock.knowledge_base.id"
_AWS_BEDROCK_DATA_SOURCE_ID: str = "aws.bedrock.data_source.id"
+
_GEN_AI_REQUEST_MODEL: str = "gen_ai.request.model"
+_GEN_AI_REQUEST_TEMPERATURE: str = "gen_ai.request.temperature"
+_GEN_AI_REQUEST_TOP_P: str = "gen_ai.request.top_p"
+_GEN_AI_REQUEST_MAX_TOKENS: str = "gen_ai.request.max_tokens"
+_GEN_AI_RESPONSE_FINISH_REASONS: str = "gen_ai.response.finish_reasons"
+_GEN_AI_USAGE_INPUT_TOKENS: str = "gen_ai.usage.input_tokens"
+_GEN_AI_USAGE_OUTPUT_TOKENS: str = "gen_ai.usage.output_tokens"
+
_AWS_SECRET_ARN: str = "aws.secretsmanager.secret.arn"
_AWS_STATE_MACHINE_ARN: str = "aws.stepfunctions.state_machine.arn"
_AWS_ACTIVITY_ARN: str = "aws.stepfunctions.activity.arn"
-# pylint: disable=too-many-public-methods
+# pylint: disable=too-many-public-methods,too-many-lines
class BotocoreTest(ContractTestBase):
_local_stack: LocalStackContainer
@@ -403,9 +412,9 @@ def test_kinesis_fault(self):
span_name="Kinesis.PutRecord",
)
- def test_bedrock_runtime_invoke_model(self):
+ def test_bedrock_runtime_invoke_model_amazon_titan(self):
self.do_test_requests(
- "bedrock/invokemodel/invoke-model",
+ "bedrock/invokemodel/invoke-model/amazon.titan-text-premier-v1:0",
"GET",
200,
0,
@@ -418,6 +427,153 @@ def test_bedrock_runtime_invoke_model(self):
cloudformation_primary_identifier="amazon.titan-text-premier-v1:0",
request_specific_attributes={
_GEN_AI_REQUEST_MODEL: "amazon.titan-text-premier-v1:0",
+ _GEN_AI_REQUEST_MAX_TOKENS: 3072,
+ _GEN_AI_REQUEST_TEMPERATURE: 0.7,
+ _GEN_AI_REQUEST_TOP_P: 0.9,
+ },
+ response_specific_attributes={
+ _GEN_AI_RESPONSE_FINISH_REASONS: ["CONTENT_FILTERED"],
+ _GEN_AI_USAGE_INPUT_TOKENS: 15,
+ _GEN_AI_USAGE_OUTPUT_TOKENS: 13,
+ },
+ span_name="Bedrock Runtime.InvokeModel",
+ )
+
+ def test_bedrock_runtime_invoke_model_anthropic_claude(self):
+ self.do_test_requests(
+ "bedrock/invokemodel/invoke-model/anthropic.claude-v2:1",
+ "GET",
+ 200,
+ 0,
+ 0,
+ rpc_service="Bedrock Runtime",
+ remote_service="AWS::BedrockRuntime",
+ remote_operation="InvokeModel",
+ remote_resource_type="AWS::Bedrock::Model",
+ remote_resource_identifier="anthropic.claude-v2:1",
+ cloudformation_primary_identifier="anthropic.claude-v2:1",
+ request_specific_attributes={
+ _GEN_AI_REQUEST_MODEL: "anthropic.claude-v2:1",
+ _GEN_AI_REQUEST_MAX_TOKENS: 1000,
+ _GEN_AI_REQUEST_TEMPERATURE: 0.99,
+ _GEN_AI_REQUEST_TOP_P: 1,
+ },
+ response_specific_attributes={
+ _GEN_AI_RESPONSE_FINISH_REASONS: ["end_turn"],
+ _GEN_AI_USAGE_INPUT_TOKENS: 15,
+ _GEN_AI_USAGE_OUTPUT_TOKENS: 13,
+ },
+ span_name="Bedrock Runtime.InvokeModel",
+ )
+
+ def test_bedrock_runtime_invoke_model_meta_llama(self):
+ self.do_test_requests(
+ "bedrock/invokemodel/invoke-model/meta.llama2-13b-chat-v1",
+ "GET",
+ 200,
+ 0,
+ 0,
+ rpc_service="Bedrock Runtime",
+ remote_service="AWS::BedrockRuntime",
+ remote_operation="InvokeModel",
+ remote_resource_type="AWS::Bedrock::Model",
+ remote_resource_identifier="meta.llama2-13b-chat-v1",
+ cloudformation_primary_identifier="meta.llama2-13b-chat-v1",
+ request_specific_attributes={
+ _GEN_AI_REQUEST_MODEL: "meta.llama2-13b-chat-v1",
+ _GEN_AI_REQUEST_MAX_TOKENS: 512,
+ _GEN_AI_REQUEST_TEMPERATURE: 0.5,
+ _GEN_AI_REQUEST_TOP_P: 0.9,
+ },
+ response_specific_attributes={
+ _GEN_AI_RESPONSE_FINISH_REASONS: ["stop"],
+ _GEN_AI_USAGE_INPUT_TOKENS: 31,
+ _GEN_AI_USAGE_OUTPUT_TOKENS: 49,
+ },
+ span_name="Bedrock Runtime.InvokeModel",
+ )
+
+ def test_bedrock_runtime_invoke_model_cohere_command(self):
+ self.do_test_requests(
+ "bedrock/invokemodel/invoke-model/cohere.command-r-v1:0",
+ "GET",
+ 200,
+ 0,
+ 0,
+ rpc_service="Bedrock Runtime",
+ remote_service="AWS::BedrockRuntime",
+ remote_operation="InvokeModel",
+ remote_resource_type="AWS::Bedrock::Model",
+ remote_resource_identifier="cohere.command-r-v1:0",
+ cloudformation_primary_identifier="cohere.command-r-v1:0",
+ request_specific_attributes={
+ _GEN_AI_REQUEST_MODEL: "cohere.command-r-v1:0",
+ _GEN_AI_REQUEST_MAX_TOKENS: 512,
+ _GEN_AI_REQUEST_TEMPERATURE: 0.5,
+ _GEN_AI_REQUEST_TOP_P: 0.65,
+ },
+ response_specific_attributes={
+ _GEN_AI_RESPONSE_FINISH_REASONS: ["COMPLETE"],
+ _GEN_AI_USAGE_INPUT_TOKENS: math.ceil(
+ len("Describe the purpose of a 'hello world' program in one line.") / 6
+ ),
+ _GEN_AI_USAGE_OUTPUT_TOKENS: math.ceil(len("test-generation-text") / 6),
+ },
+ span_name="Bedrock Runtime.InvokeModel",
+ )
+
+ def test_bedrock_runtime_invoke_model_ai21_jamba(self):
+ self.do_test_requests(
+ "bedrock/invokemodel/invoke-model/ai21.jamba-1-5-large-v1:0",
+ "GET",
+ 200,
+ 0,
+ 0,
+ rpc_service="Bedrock Runtime",
+ remote_service="AWS::BedrockRuntime",
+ remote_operation="InvokeModel",
+ remote_resource_type="AWS::Bedrock::Model",
+ remote_resource_identifier="ai21.jamba-1-5-large-v1:0",
+ cloudformation_primary_identifier="ai21.jamba-1-5-large-v1:0",
+ request_specific_attributes={
+ _GEN_AI_REQUEST_MODEL: "ai21.jamba-1-5-large-v1:0",
+ _GEN_AI_REQUEST_MAX_TOKENS: 512,
+ _GEN_AI_REQUEST_TEMPERATURE: 0.6,
+ _GEN_AI_REQUEST_TOP_P: 0.8,
+ },
+ response_specific_attributes={
+ _GEN_AI_RESPONSE_FINISH_REASONS: ["stop"],
+ _GEN_AI_USAGE_INPUT_TOKENS: 21,
+ _GEN_AI_USAGE_OUTPUT_TOKENS: 24,
+ },
+ span_name="Bedrock Runtime.InvokeModel",
+ )
+
+ def test_bedrock_runtime_invoke_model_mistral(self):
+ self.do_test_requests(
+ "bedrock/invokemodel/invoke-model/mistral.mistral-7b-instruct-v0:2",
+ "GET",
+ 200,
+ 0,
+ 0,
+ rpc_service="Bedrock Runtime",
+ remote_service="AWS::BedrockRuntime",
+ remote_operation="InvokeModel",
+ remote_resource_type="AWS::Bedrock::Model",
+ remote_resource_identifier="mistral.mistral-7b-instruct-v0:2",
+ cloudformation_primary_identifier="mistral.mistral-7b-instruct-v0:2",
+ request_specific_attributes={
+ _GEN_AI_REQUEST_MODEL: "mistral.mistral-7b-instruct-v0:2",
+ _GEN_AI_REQUEST_MAX_TOKENS: 4096,
+ _GEN_AI_REQUEST_TEMPERATURE: 0.75,
+ _GEN_AI_REQUEST_TOP_P: 0.99,
+ },
+ response_specific_attributes={
+ _GEN_AI_RESPONSE_FINISH_REASONS: ["stop"],
+ _GEN_AI_USAGE_INPUT_TOKENS: math.ceil(
+ len("Describe the purpose of a 'hello world' program in one line.") / 6
+ ),
+ _GEN_AI_USAGE_OUTPUT_TOKENS: math.ceil(len("test-output-text") / 6),
},
span_name="Bedrock Runtime.InvokeModel",
)
@@ -772,21 +928,23 @@ def _assert_semantic_conventions_attributes(
# TODO: botocore instrumentation is not respecting PEER_SERVICE
# self._assert_str_attribute(attributes_dict, SpanAttributes.PEER_SERVICE, "backend:8080")
for key, value in request_specific_attributes.items():
- if isinstance(value, str):
- self._assert_str_attribute(attributes_dict, key, value)
- elif isinstance(value, int):
- self._assert_int_attribute(attributes_dict, key, value)
- else:
- self._assert_array_value_ddb_table_name(attributes_dict, key, value)
+ self._assert_attribute(attributes_dict, key, value)
+
for key, value in response_specific_attributes.items():
+ self._assert_attribute(attributes_dict, key, value)
+
+ def _assert_attribute(self, attributes_dict: Dict[str, AnyValue], key, value) -> None:
+ if isinstance(value, str):
if self._is_valid_regex(value):
self._assert_match_attribute(attributes_dict, key, value)
- elif isinstance(value, str):
- self._assert_str_attribute(attributes_dict, key, value)
- elif isinstance(value, int):
- self._assert_int_attribute(attributes_dict, key, value)
else:
- self._assert_array_value_ddb_table_name(attributes_dict, key, value)
+ self._assert_str_attribute(attributes_dict, key, value)
+ elif isinstance(value, int):
+ self._assert_int_attribute(attributes_dict, key, value)
+ elif isinstance(value, float):
+ self._assert_float_attribute(attributes_dict, key, value)
+ else:
+ self._assert_array_value_ddb_table_name(attributes_dict, key, value)
@override
def _assert_metric_attributes(
From 423f955adbb8e1acab2fb97e504ec5d42d4dac34 Mon Sep 17 00:00:00 2001
From: Michael He <53622546+yiyuan-he@users.noreply.github.com>
Date: Mon, 2 Dec 2024 15:25:35 -0800
Subject: [PATCH 7/9] Add Contract Tests for SNS (#296)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
*Description of changes:*
Set up SNS contract test cases for the `AWS::SNS::Topic` resource.
Note: We were not able to get the `error` path to work, so we added a
`TODO` comment for now. boto3 seems to handle the response code for this
resource in a special way compared to the other resources; we will need
to investigate further to figure out how we can add this case.
*Test Plan:*
![Screenshot 2024-11-27 at 10 52 37 AM](https://github.com/user-attachments/assets/121a5829-e960-4292-b3c5-8ceb9c6f9d6c)
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
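
For reference, the fault path relies on the same botocore event trick as the
other contract tests: a `before-call` handler returns a canned response, so
botocore skips the real HTTP request. A hedged sketch of the pattern (the
`HTTPResponse` namedtuple mirrors the helper in `botocore_server.py`;
`inject_500_error` is simplified here):

```python
from collections import namedtuple

import boto3

HTTPResponse = namedtuple("HTTPResponse", ["status_code", "headers", "body"])


def inject_500_error(api_name, **kwargs):
    # A non-None return from a before-call handler is used as the
    # (http_response, parsed_response) pair instead of a network call.
    return HTTPResponse(500, headers={}, body=""), {}


sns_client = boto3.client("sns", region_name="us-west-2")
sns_client.meta.events.register(
    "before-call.sns.GetTopicAttributes",
    lambda **kwargs: inject_500_error("GetTopicAttributes", **kwargs),
)
try:
    sns_client.get_topic_attributes(
        TopicArn="arn:aws:sns:us-west-2:000000000000:invalid-topic"
    )
except Exception as exception:  # the injected 500 surfaces as a ClientError
    print("Expected exception occurred:", exception)
```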
---
.../applications/botocore/botocore_server.py | 28 +++++++++++++
.../test/amazon/botocore/botocore_test.py | 42 ++++++++++++++++++-
2 files changed, 69 insertions(+), 1 deletion(-)
diff --git a/contract-tests/images/applications/botocore/botocore_server.py b/contract-tests/images/applications/botocore/botocore_server.py
index d1736d56c..0575f4d88 100644
--- a/contract-tests/images/applications/botocore/botocore_server.py
+++ b/contract-tests/images/applications/botocore/botocore_server.py
@@ -52,6 +52,8 @@ def do_GET(self):
self._handle_secretsmanager_request()
if self.in_path("stepfunctions"):
self._handle_stepfunctions_request()
+ if self.in_path("sns"):
+ self._handle_sns_request()
self._end_request(self.main_status)
@@ -369,6 +371,27 @@ def _handle_stepfunctions_request(self) -> None:
else:
set_main_status(404)
+ def _handle_sns_request(self) -> None:
+ sns_client = boto3.client("sns", endpoint_url=_AWS_SDK_ENDPOINT, region_name=_AWS_REGION)
+ if self.in_path(_FAULT):
+ set_main_status(500)
+ try:
+ fault_client = boto3.client("sns", endpoint_url=_FAULT_ENDPOINT, region_name=_AWS_REGION)
+ fault_client.meta.events.register(
+ "before-call.sns.GetTopicAttributes",
+ lambda **kwargs: inject_500_error("GetTopicAttributes", **kwargs),
+ )
+ fault_client.get_topic_attributes(TopicArn="arn:aws:sns:us-west-2:000000000000:invalid-topic")
+ except Exception as exception:
+ print("Expected exception occurred", exception)
+ elif self.in_path("gettopicattributes/test-topic"):
+ set_main_status(200)
+ sns_client.get_topic_attributes(
+ TopicArn="arn:aws:sns:us-west-2:000000000000:test-topic",
+ )
+ else:
+ set_main_status(404)
+
def _end_request(self, status_code: int):
self.send_response_only(status_code)
self.end_headers()
@@ -557,6 +580,11 @@ def prepare_aws_server() -> None:
Name="testSecret", SecretString="secretValue", Description="This is a test secret"
)
+ # Set up SNS so tests can access a topic.
+ sns_client: BaseClient = boto3.client("sns", endpoint_url=_AWS_SDK_ENDPOINT, region_name=_AWS_REGION)
+ create_topic_response = sns_client.create_topic(Name="test-topic")
+ print("Created topic successfully:", create_topic_response)
+
# Set up Step Functions so tests can access a state machine and activity.
sfn_client: BaseClient = boto3.client("stepfunctions", endpoint_url=_AWS_SDK_ENDPOINT, region_name=_AWS_REGION)
sfn_response = sfn_client.list_state_machines()
diff --git a/contract-tests/tests/test/amazon/botocore/botocore_test.py b/contract-tests/tests/test/amazon/botocore/botocore_test.py
index b2821a8b6..c8a346f5e 100644
--- a/contract-tests/tests/test/amazon/botocore/botocore_test.py
+++ b/contract-tests/tests/test/amazon/botocore/botocore_test.py
@@ -47,6 +47,7 @@
_AWS_SECRET_ARN: str = "aws.secretsmanager.secret.arn"
_AWS_STATE_MACHINE_ARN: str = "aws.stepfunctions.state_machine.arn"
_AWS_ACTIVITY_ARN: str = "aws.stepfunctions.activity.arn"
+_AWS_SNS_TOPIC_ARN: str = "aws.sns.topic.arn"
# pylint: disable=too-many-public-methods,too-many-lines
@@ -86,7 +87,7 @@ def set_up_dependency_container(cls):
cls._local_stack: LocalStackContainer = (
LocalStackContainer(image="localstack/localstack:3.5.0")
.with_name("localstack")
- .with_services("s3", "sqs", "dynamodb", "kinesis", "secretsmanager", "iam", "stepfunctions")
+ .with_services("s3", "sqs", "dynamodb", "kinesis", "secretsmanager", "iam", "stepfunctions", "sns")
.with_env("DEFAULT_REGION", "us-west-2")
.with_kwargs(network=NETWORK_NAME, networking_config=local_stack_networking_config)
)
@@ -752,6 +753,45 @@ def test_secretsmanager_fault(self):
span_name="Secrets Manager.GetSecretValue",
)
+ def test_sns_get_topic_attributes(self):
+ self.do_test_requests(
+ "sns/gettopicattributes/test-topic",
+ "GET",
+ 200,
+ 0,
+ 0,
+ rpc_service="SNS",
+ remote_service="AWS::SNS",
+ remote_operation="GetTopicAttributes",
+ remote_resource_type="AWS::SNS::Topic",
+ remote_resource_identifier="test-topic",
+ cloudformation_primary_identifier="arn:aws:sns:us-west-2:000000000000:test-topic",
+ request_specific_attributes={_AWS_SNS_TOPIC_ARN: "arn:aws:sns:us-west-2:000000000000:test-topic"},
+ span_name="SNS.GetTopicAttributes",
+ )
+
+ # TODO: Add error case for sns - our test setup is not setting the http status code properly
+ # for this resource
+
+ def test_sns_fault(self):
+ self.do_test_requests(
+ "sns/fault",
+ "GET",
+ 500,
+ 0,
+ 1,
+ rpc_service="SNS",
+ remote_service="AWS::SNS",
+ remote_operation="GetTopicAttributes",
+ remote_resource_type="AWS::SNS::Topic",
+ remote_resource_identifier="invalid-topic",
+ cloudformation_primary_identifier="arn:aws:sns:us-west-2:000000000000:invalid-topic",
+ request_specific_attributes={
+ _AWS_SNS_TOPIC_ARN: "arn:aws:sns:us-west-2:000000000000:invalid-topic",
+ },
+ span_name="SNS.GetTopicAttributes",
+ )
+
def test_stepfunctions_describe_state_machine(self):
self.do_test_requests(
"stepfunctions/describestatemachine/my-state-machine",
From e06706e9f1eeefa3a32adc4c194cea379ae61021 Mon Sep 17 00:00:00 2001
From: Mahad Janjua
Date: Fri, 6 Dec 2024 13:54:51 -0800
Subject: [PATCH 8/9] [EKS/Test] Always run all versions of Python for EKS main
build
---
.github/workflows/application-signals-e2e-test.yml | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/.github/workflows/application-signals-e2e-test.yml b/.github/workflows/application-signals-e2e-test.yml
index 97a6e3b83..c7a5731a9 100644
--- a/.github/workflows/application-signals-e2e-test.yml
+++ b/.github/workflows/application-signals-e2e-test.yml
@@ -117,6 +117,7 @@ jobs:
python-version: '3.8'
eks-v3-9-amd64:
+ if: ${{ always() }}
needs: eks-v3-8-amd64
uses: aws-observability/aws-application-signals-test-framework/.github/workflows/python-eks-test.yml@main
secrets: inherit
@@ -128,6 +129,7 @@ jobs:
python-version: '3.9'
eks-v3-10-amd64:
+ if: ${{ always() }}
needs: eks-v3-9-amd64
uses: aws-observability/aws-application-signals-test-framework/.github/workflows/python-eks-test.yml@main
secrets: inherit
@@ -139,6 +141,7 @@ jobs:
python-version: '3.10'
eks-v3-11-amd64:
+ if: ${{ always() }}
needs: eks-v3-10-amd64
uses: aws-observability/aws-application-signals-test-framework/.github/workflows/python-eks-test.yml@main
secrets: inherit
@@ -150,6 +153,7 @@ jobs:
python-version: '3.11'
eks-v3-12-amd64:
+ if: ${{ always() }}
needs: eks-v3-11-amd64
uses: aws-observability/aws-application-signals-test-framework/.github/workflows/python-eks-test.yml@main
secrets: inherit
From 11f16d2f0768004b03d2563a082ba55e96f68ec0 Mon Sep 17 00:00:00 2001
From: Harry
Date: Fri, 13 Dec 2024 15:24:31 -0800
Subject: [PATCH 9/9] Update dependency version - dependabot alert (#301)
*Issue #, if available:*
Dependabot is raising the following security alerts but is unable to
create PRs for them automatically, throwing an error:
- https://github.com/aws-observability/aws-otel-python-instrumentation/security/dependabot/21
- https://github.com/aws-observability/aws-otel-python-instrumentation/security/dependabot/19
- https://github.com/aws-observability/aws-otel-python-instrumentation/security/dependabot/20
This PR updates the versions manually.
*Description of changes:*
Updated the dependencies to versions with the security issues patched.
By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.
---
contract-tests/images/applications/django/requirements.txt | 2 +-
.../images/applications/mysql-connector/requirements.txt | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/contract-tests/images/applications/django/requirements.txt b/contract-tests/images/applications/django/requirements.txt
index 0af2ec462..34a9bd3fa 100644
--- a/contract-tests/images/applications/django/requirements.txt
+++ b/contract-tests/images/applications/django/requirements.txt
@@ -1,4 +1,4 @@
opentelemetry-distro==0.46b0
opentelemetry-exporter-otlp-proto-grpc==1.25.0
typing-extensions==4.9.0
-django==5.0.9
\ No newline at end of file
+django==5.0.10
\ No newline at end of file
diff --git a/contract-tests/images/applications/mysql-connector/requirements.txt b/contract-tests/images/applications/mysql-connector/requirements.txt
index 615275526..2910612dc 100644
--- a/contract-tests/images/applications/mysql-connector/requirements.txt
+++ b/contract-tests/images/applications/mysql-connector/requirements.txt
@@ -1,4 +1,4 @@
opentelemetry-distro==0.46b0
opentelemetry-exporter-otlp-proto-grpc==1.25.0
typing-extensions==4.9.0
-mysql-connector-python~=8.0
\ No newline at end of file
+mysql-connector-python~=9.1.0
\ No newline at end of file