Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
Bedrock/Llama Issue
I've recently been testing Pydantic AI on AWS Bedrock. When testing against `bedrock:us.meta.llama4-maverick-17b-instruct-v1:0` with a structured output type, I get the following error:

```
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the Converse operation: This model doesn't support the toolConfig.toolChoice.any field. Remove toolConfig.toolChoice.any and try again.
```
Note in my example that if I set a custom profile with `bedrock_supports_tool_choice = False`, the requests complete successfully.
This may not be new; I found a related issue that is marked as closed: #2091.
It seems like this option should default to False for Meta models on Bedrock.
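For context, here is a minimal sketch of what I believe is going on, based purely on the error message: when the profile advertises tool-choice support, the Converse request gains a `toolChoice: {"any": {}}` entry that Llama models reject. The field names come from the AWS Converse API; `build_tool_config` is a hypothetical helper for illustration, not Pydantic AI's actual request-building code.

```python
# Illustration only: how a Converse-style toolConfig payload might differ
# depending on the bedrock_supports_tool_choice flag. build_tool_config is a
# hypothetical helper, not part of Pydantic AI.

def build_tool_config(tool_specs: list[dict], supports_tool_choice: bool) -> dict:
    """Build a Converse-style toolConfig dict from a list of tool specs."""
    tool_config: dict = {"tools": [{"toolSpec": spec} for spec in tool_specs]}
    if supports_tool_choice:
        # This is the field Llama models on Bedrock reject with a ValidationException.
        tool_config["toolChoice"] = {"any": {}}
    return tool_config


final_result = {"name": "final_result", "inputSchema": {"json": {"type": "object"}}}
print(build_tool_config([final_result], supports_tool_choice=True))
print(build_tool_config([final_result], supports_tool_choice=False))
```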
Bedrock/Claude Observation
Another observation: this option is set to False for Anthropic models on AWS, but in my test here it clearly works (at least for Claude Sonnet 4 with thinking disabled). See `BedrockProvider.model_profile`. @DouweM, any idea why this is set to False?
Question/Advice
In my test code below you can see how I update the `ModelProfile` with better defaults as I discover them, but I wonder: is there an easier way to do this?
Setting Profile Option 1

```python
provider = BedrockProvider()
profile = provider.model_profile(self.BEDROCK_LLAMA4_MODEL_ID)
if not isinstance(profile, BedrockModelProfile):
    profile = BedrockModelProfile.from_profile(profile)
profile.bedrock_supports_tool_choice = False
agent = Agent(
    BedrockConverseModel(
        self.BEDROCK_LLAMA4_MODEL_ID,
        provider=provider,
        profile=profile,
    ),
    system_prompt="List countries based on the user request. Only output real countries.",
    output_type=list[Country],
)
```

Setting Profile Option 2
```python
agent = Agent(
    f"bedrock:{self.BEDROCK_LLAMA4_MODEL_ID}",
    system_prompt="List countries based on the user request. Only output real countries.",
    output_type=list[Country],
)
profile = BedrockModelProfile.from_profile(agent.model.profile)
profile.bedrock_supports_tool_choice = False
agent.model.profile = profile
```

Apologies for munging a question into this issue; I'm new to this library and trying to find the best way to work with it for multi-model or multi-provider prompts.
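If it helps frame the question: what I'm reaching for is a one-call "copy this profile with these fields overridden" helper. Below is a rough sketch of that pattern using a stand-in dataclass; `FakeProfile` and `with_overrides` are hypothetical names for illustration, and I'm assuming the real profile type is a dataclass, which may be wrong.

```python
# Sketch of the "copy a profile with overrides" pattern from Option 2 above.
# FakeProfile stands in for BedrockModelProfile; with_overrides is hypothetical.
from dataclasses import dataclass, replace


@dataclass
class FakeProfile:
    supports_json_schema_output: bool = False
    bedrock_supports_tool_choice: bool = True


def with_overrides(profile: FakeProfile, **overrides) -> FakeProfile:
    """Return a copy of the profile dataclass with the given fields replaced."""
    return replace(profile, **overrides)


patched = with_overrides(FakeProfile(), bedrock_supports_tool_choice=False)
print(patched.bedrock_supports_tool_choice)  # False
print(FakeProfile().bedrock_supports_tool_choice)  # True (original default untouched)
```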
Example Code
```python
from unittest import TestCase

from pydantic import BaseModel, Field
from pydantic_ai import Agent
from pydantic_ai.models.bedrock import BedrockConverseModel, BedrockModelProfile
from pydantic_ai.providers.bedrock import BedrockProvider


class Country(BaseModel):
    name: str = Field(..., description="The name of the country")
    capital: str = Field(..., description="The capital city of the country")


class TestLlamaOnBedrock(TestCase):
    BEDROCK_LLAMA4_MODEL_ID = "us.meta.llama4-maverick-17b-instruct-v1:0"
    BEDROCK_CLAUDE_SONNET4_MODEL_ID = "us.anthropic.claude-sonnet-4-20250514-v1:0"

    def test_llama_on_bedrock_with_tool_choice_disabled(self):
        """Prove that Llama on Bedrock works when bedrock_supports_tool_choice is
        set to False in the profile."""
        provider = BedrockProvider()
        profile = provider.model_profile(self.BEDROCK_LLAMA4_MODEL_ID)
        if not isinstance(profile, BedrockModelProfile):
            profile = BedrockModelProfile.from_profile(profile)
        profile.bedrock_supports_tool_choice = False
        agent = Agent(
            BedrockConverseModel(
                self.BEDROCK_LLAMA4_MODEL_ID,
                provider=provider,
                profile=profile,
            ),
            system_prompt="List countries based on the user request. Only output real countries.",
            output_type=list[Country],
        )
        result = agent.run_sync("Give me three random countries and their capitals.")
        self.assertIsNotNone(result.output)
        self.assertIsInstance(result.output, list)
        self.assertTrue(all(isinstance(country, Country) for country in result.output))

    def test_llama_on_bedrock_with_tool_choice_disabled_easy(self):
        """Maybe an easier way to set bedrock_supports_tool_choice = False: patch
        the profile on the agent's model after construction."""
        agent = Agent(
            f"bedrock:{self.BEDROCK_LLAMA4_MODEL_ID}",
            system_prompt="List countries based on the user request. Only output real countries.",
            output_type=list[Country],
        )
        profile = BedrockModelProfile.from_profile(agent.model.profile)
        profile.bedrock_supports_tool_choice = False
        agent.model.profile = profile
        result = agent.run_sync("Give me three random countries and their capitals.")
        self.assertIsNotNone(result.output)
        self.assertIsInstance(result.output, list)
        self.assertTrue(all(isinstance(country, Country) for country in result.output))

    # THIS TEST IS FAILING
    def test_llama_on_bedrock_with_tool_choice_notset(self):
        """Prove that the default Llama/Bedrock profile is incorrect."""
        agent = Agent(
            f"bedrock:{self.BEDROCK_LLAMA4_MODEL_ID}",
            system_prompt="List countries based on the user request. Only output real countries.",
            output_type=list[Country],
        )
        result = agent.run_sync("Give me three random countries and their capitals.")
        self.assertIsNotNone(result.output)
        self.assertIsInstance(result.output, list)
        self.assertTrue(all(isinstance(country, Country) for country in result.output))

    def test_claude_on_bedrock_has_tool_choice_disabled(self):
        """Verify that the code currently sets bedrock_supports_tool_choice to
        False for Anthropic models on Bedrock."""
        provider = BedrockProvider()
        profile = provider.model_profile(self.BEDROCK_CLAUDE_SONNET4_MODEL_ID)
        self.assertIsInstance(profile, BedrockModelProfile)
        self.assertFalse(profile.bedrock_supports_tool_choice)

    def test_claude_on_bedrock_with_tool_choice_notset(self):
        """Test Claude on Bedrock with the default profile. The previous test
        asserts that bedrock_supports_tool_choice is already False in that
        profile."""
        agent = Agent(
            f"bedrock:{self.BEDROCK_CLAUDE_SONNET4_MODEL_ID}",
            system_prompt="List countries based on the user request. Only output real countries.",
            output_type=list[Country],
        )
        result = agent.run_sync("Give me three random countries and their capitals.")
        self.assertIsNotNone(result.output)
        self.assertIsInstance(result.output, list)
        self.assertTrue(all(isinstance(country, Country) for country in result.output))

    def test_claude_on_bedrock_with_tool_choice_enabled(self):
        """Prove that Claude on Bedrock works even when
        bedrock_supports_tool_choice is forced to True in the profile."""
        provider = BedrockProvider()
        profile = provider.model_profile(self.BEDROCK_CLAUDE_SONNET4_MODEL_ID)
        if not isinstance(profile, BedrockModelProfile):
            profile = BedrockModelProfile.from_profile(profile)
        profile.bedrock_supports_tool_choice = True
        agent = Agent(
            BedrockConverseModel(
                self.BEDROCK_CLAUDE_SONNET4_MODEL_ID,
                provider=provider,
                profile=profile,
            ),
            system_prompt="List countries based on the user request. Only output real countries.",
            output_type=list[Country],
        )
        result = agent.run_sync("Give me three random countries and their capitals.")
        self.assertIsNotNone(result.output)
        self.assertIsInstance(result.output, list)
        self.assertTrue(all(isinstance(country, Country) for country in result.output))
```

Python, Pydantic AI & LLM client version

Python: 3.12.11
Pydantic AI: 0.8.1