Commit e894e36
feat: add OpenAI-compatible Bedrock provider (#3748)
Implements an AWS Bedrock inference provider using the OpenAI-compatible endpoint for Llama models available through Bedrock.

Closes: #3410

## What does this PR do?

Adds AWS Bedrock as an inference provider using the OpenAI-compatible endpoint. This lets us use Bedrock models (GPT-OSS, Llama) through the standard llama-stack inference API. The implementation uses LiteLLM's OpenAI client under the hood, so it gets all the OpenAI compatibility features. The provider handles per-request API key overrides via headers.

## Test Plan

**Tested the following scenarios:**

- Non-streaming completion - basic request/response flow
- Streaming completion - SSE streaming with chunked responses
- Multi-turn conversations - context retention across turns
- Tool calling - function calling with proper tool_calls format

# Bedrock OpenAI-Compatible Provider - Test Results

**Model:** `bedrock-inference/openai.gpt-oss-20b-1:0`

---

## Test 1: Model Listing

**Request:**

```http
GET /v1/models HTTP/1.1
```

**Response:**

```http
HTTP/1.1 200 OK
Content-Type: application/json

{
  "data": [
    {"identifier": "bedrock-inference/openai.gpt-oss-20b-1:0", ...},
    {"identifier": "bedrock-inference/openai.gpt-oss-40b-1:0", ...}
  ]
}
```

---

## Test 2: Non-Streaming Completion

**Request:**

```http
POST /v1/chat/completions HTTP/1.1
Content-Type: application/json

{
  "model": "bedrock-inference/openai.gpt-oss-20b-1:0",
  "messages": [{"role": "user", "content": "Say 'Hello from Bedrock' and nothing else"}],
  "stream": false
}
```

**Response:**

```http
HTTP/1.1 200 OK
Content-Type: application/json

{
  "choices": [{
    "finish_reason": "stop",
    "message": {"content": "...Hello from Bedrock"}
  }],
  "usage": {"prompt_tokens": 79, "completion_tokens": 50, "total_tokens": 129}
}
```

---

## Test 3: Streaming Completion

**Request:**

```http
POST /v1/chat/completions HTTP/1.1
Content-Type: application/json

{
  "model": "bedrock-inference/openai.gpt-oss-20b-1:0",
  "messages": [{"role": "user", "content": "Count from 1 to 5"}],
  "stream": true
}
```

**Response:**

```http
HTTP/1.1 200 OK
Content-Type: text/event-stream

[6 SSE chunks received]
Final content: "1, 2, 3, 4, 5"
```

---

## Test 4: Error Handling - Invalid Model

**Request:**

```http
POST /v1/chat/completions HTTP/1.1
Content-Type: application/json

{
  "model": "invalid-model-id",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}
```

**Response:**

```http
HTTP/1.1 404 Not Found
Content-Type: application/json

{
  "detail": "Model 'invalid-model-id' not found. Use 'client.models.list()' to list available Models."
}
```

---

## Test 5: Multi-Turn Conversation

**Request 1:**

```http
POST /v1/chat/completions HTTP/1.1

{
  "messages": [{"role": "user", "content": "My name is Alice"}]
}
```

**Response 1:**

```http
HTTP/1.1 200 OK

{
  "choices": [{
    "message": {"content": "...Nice to meet you, Alice! How can I help you today?"}
  }]
}
```

**Request 2 (with history):**

```http
POST /v1/chat/completions HTTP/1.1

{
  "messages": [
    {"role": "user", "content": "My name is Alice"},
    {"role": "assistant", "content": "...Nice to meet you, Alice!..."},
    {"role": "user", "content": "What is my name?"}
  ]
}
```

**Response 2:**

```http
HTTP/1.1 200 OK

{
  "choices": [{
    "message": {"content": "...Your name is Alice."}
  }],
  "usage": {"prompt_tokens": 183, "completion_tokens": 42}
}
```

**Context retained across turns**

---

## Test 6: System Messages

**Request:**

```http
POST /v1/chat/completions HTTP/1.1

{
  "messages": [
    {"role": "system", "content": "You are Shakespeare. Respond only in Shakespearean English."},
    {"role": "user", "content": "Tell me about the weather"}
  ]
}
```

**Response:**

```http
HTTP/1.1 200 OK

{
  "choices": [{
    "message": {"content": "Lo! I heed thy request..."}
  }],
  "usage": {"completion_tokens": 813}
}
```

---

## Test 7: Tool Calling

**Request:**

```http
POST /v1/chat/completions HTTP/1.1

{
  "messages": [{"role": "user", "content": "What's the weather in San Francisco?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "parameters": {"type": "object", "properties": {"location": {"type": "string"}}}
    }
  }]
}
```

**Response:**

```http
HTTP/1.1 200 OK

{
  "choices": [{
    "finish_reason": "tool_calls",
    "message": {
      "tool_calls": [{
        "function": {"name": "get_weather", "arguments": "{\"location\":\"San Francisco\"}"}
      }]
    }
  }]
}
```

---

## Test 8: Sampling Parameters

**Request:**

```http
POST /v1/chat/completions HTTP/1.1

{
  "messages": [{"role": "user", "content": "Say hello"}],
  "temperature": 0.7,
  "top_p": 0.9
}
```

**Response:**

```http
HTTP/1.1 200 OK

{
  "choices": [{
    "message": {"content": "...Hello! 👋 How can I help you today?"}
  }]
}
```

---

## Test 9: Authentication Error Handling

### Subtest A: Invalid API Key

**Request:**

```http
POST /v1/chat/completions HTTP/1.1
x-llamastack-provider-data: {"aws_bedrock_api_key": "invalid-fake-key-12345"}

{"model": "bedrock-inference/openai.gpt-oss-20b-1:0", ...}
```

**Response:**

```http
HTTP/1.1 400 Bad Request

{
  "detail": "Invalid value: Authentication failed: Error code: 401 - {'error': {'message': 'Invalid API Key format: Must start with pre-defined prefix', ...}}"
}
```

---

### Subtest B: Empty API Key (Fallback to Config)

**Request:**

```http
POST /v1/chat/completions HTTP/1.1
x-llamastack-provider-data: {"aws_bedrock_api_key": ""}

{"model": "bedrock-inference/openai.gpt-oss-20b-1:0", ...}
```

**Response:**

```http
HTTP/1.1 200 OK

{
  "choices": [{
    "message": {"content": "...Hello! How can I assist you today?"}
  }]
}
```

**Fell back to config key**

---

### Subtest C: Malformed Token

**Request:**

```http
POST /v1/chat/completions HTTP/1.1
x-llamastack-provider-data: {"aws_bedrock_api_key": "not-a-valid-bedrock-token-format"}

{"model": "bedrock-inference/openai.gpt-oss-20b-1:0", ...}
```

**Response:**

```http
HTTP/1.1 400 Bad Request

{
  "detail": "Invalid value: Authentication failed: Error code: 401 - {'error': {'message': 'Invalid API Key format: Must start with pre-defined prefix', ...}}"
}
```
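The scenarios above can be driven from Python with the standard `openai` client; a minimal sketch, assuming a local llama-stack server on its default port 8321. The per-request key override matches Test 9; the placeholder key is illustrative:

```python
# Sketch: exercise the Bedrock provider through llama-stack's
# OpenAI-compatible API. Base URL/port are assumptions about a
# local deployment, not part of this PR.
import json

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1",  # assumed local llama-stack endpoint
    api_key="none",  # llama-stack itself does not check this key here
)

# Per-request Bedrock credential override via the provider-data header (Test 9).
response = client.chat.completions.create(
    model="bedrock-inference/openai.gpt-oss-20b-1:0",
    messages=[{"role": "user", "content": "Say 'Hello from Bedrock' and nothing else"}],
    extra_headers={
        "x-llamastack-provider-data": json.dumps(
            {"aws_bedrock_api_key": "<your-bedrock-api-key>"}  # hypothetical key
        )
    },
)
print(response.choices[0].message.content)
```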
1 parent a2c4c12 commit e894e36

File tree

15 files changed
+307 -188 lines changed
Lines changed: 6 additions & 13 deletions

````diff
@@ -1,5 +1,5 @@
 ---
-description: "AWS Bedrock inference provider for accessing various AI models through AWS's managed service."
+description: "AWS Bedrock inference provider using OpenAI compatible endpoint."
 sidebar_label: Remote - Bedrock
 title: remote::bedrock
 ---
@@ -8,27 +8,20 @@ title: remote::bedrock
 
 ## Description
 
-AWS Bedrock inference provider for accessing various AI models through AWS's managed service.
+AWS Bedrock inference provider using OpenAI compatible endpoint.
 
 ## Configuration
 
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
 | `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically from the provider |
-| `aws_access_key_id` | `str \| None` | No | | The AWS access key to use. Default use environment variable: AWS_ACCESS_KEY_ID |
-| `aws_secret_access_key` | `str \| None` | No | | The AWS secret access key to use. Default use environment variable: AWS_SECRET_ACCESS_KEY |
-| `aws_session_token` | `str \| None` | No | | The AWS session token to use. Default use environment variable: AWS_SESSION_TOKEN |
-| `region_name` | `str \| None` | No | | The default AWS Region to use, for example, us-west-1 or us-west-2. Default use environment variable: AWS_DEFAULT_REGION |
-| `profile_name` | `str \| None` | No | | The profile name that contains credentials to use. Default use environment variable: AWS_PROFILE |
-| `total_max_attempts` | `int \| None` | No | | An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. Default use environment variable: AWS_MAX_ATTEMPTS |
-| `retry_mode` | `str \| None` | No | | A string representing the type of retries Boto3 will perform. Default use environment variable: AWS_RETRY_MODE |
-| `connect_timeout` | `float \| None` | No | 60.0 | The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. |
-| `read_timeout` | `float \| None` | No | 60.0 | The time in seconds till a timeout exception is thrown when attempting to read from a connection. The default is 60 seconds. |
-| `session_ttl` | `int \| None` | No | 3600 | The time in seconds till a session expires. The default is 3600 seconds (1 hour). |
+| `api_key` | `pydantic.types.SecretStr \| None` | No | | Authentication credential for the provider |
+| `region_name` | `<class 'str'>` | No | us-east-2 | AWS Region for the Bedrock Runtime endpoint |
 
 ## Sample Configuration
 
 ```yaml
-{}
+api_key: ${env.AWS_BEDROCK_API_KEY:=}
+region_name: ${env.AWS_DEFAULT_REGION:=us-east-2}
 ```
````
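For reference, a hypothetical reconstruction of the config class this table documents, assuming a plain pydantic model (the actual `BedrockConfig` in the repo may inherit shared base classes and helpers):

```python
# Hypothetical sketch of the new BedrockConfig, with field names and
# defaults taken from the table above; the real class may differ.
from pydantic import BaseModel, Field, SecretStr


class BedrockConfig(BaseModel):
    allowed_models: list[str] | None = Field(
        default=None,
        description="Models to register; None allows all models.",
    )
    refresh_models: bool = Field(
        default=False,
        description="Whether to refresh models periodically from the provider.",
    )
    api_key: SecretStr | None = Field(
        default=None,
        description="Authentication credential for the provider.",
    )
    region_name: str = Field(
        default="us-east-2",
        description="AWS Region for the Bedrock Runtime endpoint.",
    )
```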

src/llama_stack/core/routers/inference.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -190,7 +190,7 @@ async def openai_completion(
         response = await provider.openai_completion(params)
         response.model = request_model_id
-        if self.telemetry_enabled:
+        if self.telemetry_enabled and response.usage is not None:
             metrics = self._construct_metrics(
                 prompt_tokens=response.usage.prompt_tokens,
                 completion_tokens=response.usage.completion_tokens,
@@ -253,7 +253,7 @@ async def openai_chat_completion(
         if self.store:
             asyncio.create_task(self.store.store_chat_completion(response, params.messages))
 
-        if self.telemetry_enabled:
+        if self.telemetry_enabled and response.usage is not None:
             metrics = self._construct_metrics(
                 prompt_tokens=response.usage.prompt_tokens,
                 completion_tokens=response.usage.completion_tokens,
```
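Context for this guard: OpenAI-compatible backends may legitimately return a response whose `usage` is None (for example, streams where `stream_options={"include_usage": True}` was never set), so the router can no longer dereference it unconditionally. A self-contained sketch of the failure mode, using hypothetical `Usage`/`Response` stand-ins for the real response types:

```python
# Illustrative only (not repo code): why the `usage is not None` check matters.
from dataclasses import dataclass


@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int


@dataclass
class Response:
    usage: Usage | None


def emit_token_metrics(response: Response, telemetry_enabled: bool = True) -> None:
    # Without the None check, response.usage.prompt_tokens would raise
    # AttributeError whenever the backend omits the usage block.
    if telemetry_enabled and response.usage is not None:
        print(f"prompt={response.usage.prompt_tokens} completion={response.usage.completion_tokens}")


emit_token_metrics(Response(usage=Usage(prompt_tokens=79, completion_tokens=50)))
emit_token_metrics(Response(usage=None))  # safely skipped instead of crashing
```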

src/llama_stack/distributions/ci-tests/run.yaml

Lines changed: 3 additions & 0 deletions

```diff
@@ -46,6 +46,9 @@ providers:
       api_key: ${env.TOGETHER_API_KEY:=}
   - provider_id: bedrock
     provider_type: remote::bedrock
+    config:
+      api_key: ${env.AWS_BEDROCK_API_KEY:=}
+      region_name: ${env.AWS_DEFAULT_REGION:=us-east-2}
   - provider_id: ${env.NVIDIA_API_KEY:+nvidia}
     provider_type: remote::nvidia
     config:
```

src/llama_stack/distributions/starter-gpu/run-with-postgres-store.yaml

Lines changed: 3 additions & 0 deletions

```diff
@@ -46,6 +46,9 @@ providers:
       api_key: ${env.TOGETHER_API_KEY:=}
   - provider_id: bedrock
     provider_type: remote::bedrock
+    config:
+      api_key: ${env.AWS_BEDROCK_API_KEY:=}
+      region_name: ${env.AWS_DEFAULT_REGION:=us-east-2}
   - provider_id: ${env.NVIDIA_API_KEY:+nvidia}
     provider_type: remote::nvidia
     config:
```

src/llama_stack/distributions/starter-gpu/run.yaml

Lines changed: 3 additions & 0 deletions

```diff
@@ -46,6 +46,9 @@ providers:
       api_key: ${env.TOGETHER_API_KEY:=}
   - provider_id: bedrock
     provider_type: remote::bedrock
+    config:
+      api_key: ${env.AWS_BEDROCK_API_KEY:=}
+      region_name: ${env.AWS_DEFAULT_REGION:=us-east-2}
   - provider_id: ${env.NVIDIA_API_KEY:+nvidia}
     provider_type: remote::nvidia
     config:
```

src/llama_stack/distributions/starter/run-with-postgres-store.yaml

Lines changed: 3 additions & 0 deletions

```diff
@@ -46,6 +46,9 @@ providers:
       api_key: ${env.TOGETHER_API_KEY:=}
   - provider_id: bedrock
     provider_type: remote::bedrock
+    config:
+      api_key: ${env.AWS_BEDROCK_API_KEY:=}
+      region_name: ${env.AWS_DEFAULT_REGION:=us-east-2}
   - provider_id: ${env.NVIDIA_API_KEY:+nvidia}
     provider_type: remote::nvidia
     config:
```

src/llama_stack/distributions/starter/run.yaml

Lines changed: 3 additions & 0 deletions

```diff
@@ -46,6 +46,9 @@ providers:
       api_key: ${env.TOGETHER_API_KEY:=}
   - provider_id: bedrock
     provider_type: remote::bedrock
+    config:
+      api_key: ${env.AWS_BEDROCK_API_KEY:=}
+      region_name: ${env.AWS_DEFAULT_REGION:=us-east-2}
   - provider_id: ${env.NVIDIA_API_KEY:+nvidia}
     provider_type: remote::nvidia
     config:
```

src/llama_stack/providers/registry/inference.py

Lines changed: 3 additions & 2 deletions

```diff
@@ -138,10 +138,11 @@ def available_providers() -> list[ProviderSpec]:
         api=Api.inference,
         adapter_type="bedrock",
         provider_type="remote::bedrock",
-        pip_packages=["boto3"],
+        pip_packages=[],
         module="llama_stack.providers.remote.inference.bedrock",
         config_class="llama_stack.providers.remote.inference.bedrock.BedrockConfig",
-        description="AWS Bedrock inference provider for accessing various AI models through AWS's managed service.",
+        provider_data_validator="llama_stack.providers.remote.inference.bedrock.config.BedrockProviderDataValidator",
+        description="AWS Bedrock inference provider using OpenAI compatible endpoint.",
     ),
     RemoteProviderSpec(
         api=Api.inference,
```
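The newly referenced `BedrockProviderDataValidator` is what enables the per-request API key override. A hypothetical sketch of its shape, inferred from the `aws_bedrock_api_key` field used by the adapter and in the Test 9 headers (the real class in `config.py` may differ):

```python
# Hypothetical sketch of the provider-data validator referenced above.
# It validates credentials carried in the x-llamastack-provider-data header.
from pydantic import BaseModel, Field


class BedrockProviderDataValidator(BaseModel):
    aws_bedrock_api_key: str | None = Field(
        default=None,
        description="Per-request API key for Bedrock's OpenAI-compatible endpoint.",
    )
```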

src/llama_stack/providers/remote/inference/bedrock/__init__.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,7 +11,7 @@ async def get_adapter_impl(config: BedrockConfig, _deps):
 
     assert isinstance(config, BedrockConfig), f"Unexpected config type: {type(config)}"
 
-    impl = BedrockInferenceAdapter(config)
+    impl = BedrockInferenceAdapter(config=config)
 
     await impl.initialize()
```

src/llama_stack/providers/remote/inference/bedrock/bedrock.py

Lines changed: 88 additions & 103 deletions

```diff
@@ -4,139 +4,124 @@
 # This source code is licensed under the terms described in the LICENSE file in
 # the root directory of this source tree.
 
-import json
-from collections.abc import AsyncIterator
+from collections.abc import AsyncIterator, Iterable
 
-from botocore.client import BaseClient
+from openai import AuthenticationError
 
 from llama_stack.apis.inference import (
-    ChatCompletionRequest,
-    Inference,
+    OpenAIChatCompletion,
+    OpenAIChatCompletionChunk,
     OpenAIChatCompletionRequestWithExtraBody,
+    OpenAICompletion,
     OpenAICompletionRequestWithExtraBody,
     OpenAIEmbeddingsRequestWithExtraBody,
     OpenAIEmbeddingsResponse,
 )
-from llama_stack.apis.inference.inference import (
-    OpenAIChatCompletion,
-    OpenAIChatCompletionChunk,
-    OpenAICompletion,
-)
-from llama_stack.providers.remote.inference.bedrock.config import BedrockConfig
-from llama_stack.providers.utils.bedrock.client import create_bedrock_client
-from llama_stack.providers.utils.inference.model_registry import (
-    ModelRegistryHelper,
-)
-from llama_stack.providers.utils.inference.openai_compat import (
-    get_sampling_strategy_options,
-)
-from llama_stack.providers.utils.inference.prompt_adapter import (
-    chat_completion_request_to_prompt,
-)
-
-from .models import MODEL_ENTRIES
-
-REGION_PREFIX_MAP = {
-    "us": "us.",
-    "eu": "eu.",
-    "ap": "ap.",
-}
-
-
-def _get_region_prefix(region: str | None) -> str:
-    # AWS requires region prefixes for inference profiles
-    if region is None:
-        return "us."  # default to US when we don't know
-
-    # Handle case insensitive region matching
-    region_lower = region.lower()
-    for prefix in REGION_PREFIX_MAP:
-        if region_lower.startswith(f"{prefix}-"):
-            return REGION_PREFIX_MAP[prefix]
-
-    # Fallback to US for anything we don't recognize
-    return "us."
-
-
-def _to_inference_profile_id(model_id: str, region: str = None) -> str:
-    # Return ARNs unchanged
-    if model_id.startswith("arn:"):
-        return model_id
-
-    # Return inference profile IDs that already have regional prefixes
-    if any(model_id.startswith(p) for p in REGION_PREFIX_MAP.values()):
-        return model_id
-
-    # Default to US East when no region is provided
-    if region is None:
-        region = "us-east-1"
-
-    return _get_region_prefix(region) + model_id
-
+from llama_stack.core.telemetry.tracing import get_current_span
+from llama_stack.log import get_logger
+from llama_stack.providers.utils.inference.openai_mixin import OpenAIMixin
 
-class BedrockInferenceAdapter(
-    ModelRegistryHelper,
-    Inference,
-):
-    def __init__(self, config: BedrockConfig) -> None:
-        ModelRegistryHelper.__init__(self, model_entries=MODEL_ENTRIES)
-        self._config = config
-        self._client = None
+from .config import BedrockConfig
 
-    @property
-    def client(self) -> BaseClient:
-        if self._client is None:
-            self._client = create_bedrock_client(self._config)
-        return self._client
+logger = get_logger(name=__name__, category="inference::bedrock")
 
-    async def initialize(self) -> None:
-        pass
 
-    async def shutdown(self) -> None:
-        if self._client is not None:
-            self._client.close()
+class BedrockInferenceAdapter(OpenAIMixin):
+    """
+    Adapter for AWS Bedrock's OpenAI-compatible API endpoints.
 
-    async def _get_params_for_chat_completion(self, request: ChatCompletionRequest) -> dict:
-        bedrock_model = request.model
+    Supports Llama models across regions and GPT-OSS models (us-west-2 only).
 
-        sampling_params = request.sampling_params
-        options = get_sampling_strategy_options(sampling_params)
+    Note: Bedrock's OpenAI-compatible endpoint does not support /v1/models
+    for dynamic model discovery. Models must be pre-registered in the config.
+    """
 
-        if sampling_params.max_tokens:
-            options["max_gen_len"] = sampling_params.max_tokens
-        if sampling_params.repetition_penalty > 0:
-            options["repetition_penalty"] = sampling_params.repetition_penalty
+    config: BedrockConfig
+    provider_data_api_key_field: str = "aws_bedrock_api_key"
 
-        prompt = await chat_completion_request_to_prompt(request, self.get_llama_model(request.model))
+    def get_base_url(self) -> str:
+        """Get base URL for OpenAI client."""
+        return f"https://bedrock-runtime.{self.config.region_name}.amazonaws.com/openai/v1"
 
-        # Convert foundation model ID to inference profile ID
-        region_name = self.client.meta.region_name
-        inference_profile_id = _to_inference_profile_id(bedrock_model, region_name)
+    async def list_provider_model_ids(self) -> Iterable[str]:
+        """
+        Bedrock's OpenAI-compatible endpoint does not support the /v1/models endpoint.
+        Returns empty list since models must be pre-registered in the config.
+        """
+        return []
 
-        return {
-            "modelId": inference_profile_id,
-            "body": json.dumps(
-                {
-                    "prompt": prompt,
-                    **options,
-                }
-            ),
-        }
+    async def check_model_availability(self, model: str) -> bool:
+        """
+        Bedrock doesn't support dynamic model listing via /v1/models.
+        Always return True to accept all models registered in the config.
+        """
+        return True
 
     async def openai_embeddings(
         self,
         params: OpenAIEmbeddingsRequestWithExtraBody,
     ) -> OpenAIEmbeddingsResponse:
-        raise NotImplementedError()
+        """Bedrock's OpenAI-compatible API does not support the /v1/embeddings endpoint."""
+        raise NotImplementedError(
+            "Bedrock's OpenAI-compatible API does not support /v1/embeddings endpoint. "
+            "See https://docs.aws.amazon.com/bedrock/latest/userguide/inference-chat-completions.html"
+        )
 
     async def openai_completion(
         self,
         params: OpenAICompletionRequestWithExtraBody,
     ) -> OpenAICompletion:
-        raise NotImplementedError("OpenAI completion not supported by the Bedrock provider")
+        """Bedrock's OpenAI-compatible API does not support the /v1/completions endpoint."""
+        raise NotImplementedError(
+            "Bedrock's OpenAI-compatible API does not support /v1/completions endpoint. "
+            "Only /v1/chat/completions is supported. "
+            "See https://docs.aws.amazon.com/bedrock/latest/userguide/inference-chat-completions.html"
+        )
 
     async def openai_chat_completion(
         self,
         params: OpenAIChatCompletionRequestWithExtraBody,
     ) -> OpenAIChatCompletion | AsyncIterator[OpenAIChatCompletionChunk]:
-        raise NotImplementedError("OpenAI chat completion not supported by the Bedrock provider")
+        """Override to enable streaming usage metrics and handle authentication errors."""
+        # Enable streaming usage metrics when telemetry is active
+        if params.stream and get_current_span() is not None:
+            if params.stream_options is None:
+                params.stream_options = {"include_usage": True}
+            elif "include_usage" not in params.stream_options:
+                params.stream_options = {**params.stream_options, "include_usage": True}
+
+        try:
+            logger.debug(f"Calling Bedrock OpenAI API with model={params.model}, stream={params.stream}")
+            result = await super().openai_chat_completion(params=params)
+            logger.debug(f"Bedrock API returned: {type(result).__name__ if result is not None else 'None'}")
+
+            if result is None:
+                logger.error(f"Bedrock OpenAI client returned None for model={params.model}, stream={params.stream}")
+                raise RuntimeError(
+                    f"Bedrock API returned no response for model '{params.model}'. "
+                    "This may indicate the model is not supported or a network/API issue occurred."
+                )
+
+            return result
+        except AuthenticationError as e:
+            error_msg = str(e)
+
+            # Check if this is a token expiration error
+            if "expired" in error_msg.lower() or "Bearer Token has expired" in error_msg:
+                logger.error(f"AWS Bedrock authentication token expired: {error_msg}")
+                raise ValueError(
+                    "AWS Bedrock authentication failed: Bearer token has expired. "
+                    "The AWS_BEDROCK_API_KEY environment variable contains an expired pre-signed URL. "
+                    "Please refresh your token by generating a new pre-signed URL with AWS credentials. "
+                    "Refer to AWS Bedrock documentation for details on OpenAI-compatible endpoints."
+                ) from e
+            else:
+                logger.error(f"AWS Bedrock authentication failed: {error_msg}")
+                raise ValueError(
+                    f"AWS Bedrock authentication failed: {error_msg}. "
+                    "Please verify your API key is correct in the provider config or x-llamastack-provider-data header. "
+                    "The API key should be a valid AWS pre-signed URL for Bedrock's OpenAI-compatible endpoint."
+                ) from e
+        except Exception as e:
+            logger.error(f"Unexpected error calling Bedrock API: {type(e).__name__}: {e}", exc_info=True)
+            raise
```
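For intuition, the adapter amounts to pointing an OpenAI client at the Bedrock Runtime URL built in `get_base_url()`. A standalone sketch of the equivalent direct call, assuming the raw Bedrock model ID and an `AWS_BEDROCK_API_KEY` environment variable (both illustrative):

```python
# Standalone sketch of what the adapter does under the hood: an OpenAI
# client against Bedrock's OpenAI-compatible Runtime endpoint. Region and
# model ID are illustrative, not prescribed by this PR.
import os

from openai import OpenAI

region = os.environ.get("AWS_DEFAULT_REGION", "us-east-2")
client = OpenAI(
    base_url=f"https://bedrock-runtime.{region}.amazonaws.com/openai/v1",
    api_key=os.environ["AWS_BEDROCK_API_KEY"],
)

# Streaming with usage reporting, mirroring what the adapter enables when
# telemetry is active (stream_options={"include_usage": True}).
stream = client.chat.completions.create(
    model="openai.gpt-oss-20b-1:0",
    messages=[{"role": "user", "content": "Count from 1 to 5"}],
    stream=True,
    stream_options={"include_usage": True},
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
    if chunk.usage:  # the final chunk carries usage when include_usage is set
        print(f"\n[usage] prompt={chunk.usage.prompt_tokens} completion={chunk.usage.completion_tokens}")
```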
