Initial Checks
- I confirm that I'm using the latest version of Pydantic AI
- I confirm that I searched for my issue in https://github.com/pydantic/pydantic-ai/issues before opening this issue
Description
Running many Bedrock models (I've tested Llama 4 and Nova) always fails if the Agent has an output_type set, even when the agent has no tools at all. This appears to have been introduced in 0.2.12 and persists through 0.3.4. The repro example below uses a primitive (float), but the same failure occurs with a Pydantic model or a dataclass. Versions 0.2.11 and earlier work fine. A sketch of the underlying API call follows.
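The traceback below points at the raw Bedrock Converse request, so the failure can likely be reproduced directly with boto3 by forcing tool use via toolConfig.toolChoice.any. This is a minimal sketch under the assumption that Pydantic AI registers the output schema as a tool and forces the model to call it; the tool name final_result and the schema are illustrative assumptions, not taken from Pydantic AI's internals.

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical reconstruction of the request that triggers the ValidationException.
client.converse(
    modelId="us.meta.llama4-scout-17b-instruct-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "How many pounds are in 10 kilograms?"}]}
    ],
    toolConfig={
        "tools": [
            {
                "toolSpec": {
                    "name": "final_result",  # assumed name for the structured-output tool
                    "description": "Return the final answer.",
                    "inputSchema": {
                        "json": {
                            "type": "object",
                            "properties": {"response": {"type": "number"}},
                            "required": ["response"],
                        }
                    },
                }
            }
        ],
        # Forcing tool use; this is the field Llama 4 and Nova reject.
        "toolChoice": {"any": {}},
    },
)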
Example Code
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "pydantic-ai==0.3.4",
#     "dotenv",
# ]
# ///
from pydantic_ai import Agent


async def main():
    agent = Agent(
        model="bedrock:us.meta.llama4-scout-17b-instruct-v1:0",
        system_prompt="You are a helpful assistant.",
        output_type=float,
    )
    input_data = "How many pounds are in 10 kilograms?"
    response = await agent.run(input_data)
    print(response.output)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())

Output:
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the Converse operation: This model doesn't support the toolConfig.toolChoice.any field. Remove toolConfig.toolChoice.any and try again.
Edit: Simplified repro script.
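As a stopgap, since 0.2.11 and earlier are unaffected, pinning the dependency in the script header avoids the error:

# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "pydantic-ai==0.2.11",  # last version reported to work with Bedrock + output_type
#     "dotenv",
# ]
# ///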
Python, Pydantic AI & LLM client version
Python 3.13, pydantic-ai 0.3.4.