
Running many models on Bedrock with output_type fails with "This model doesn't support the toolConfig.toolChoice.any field." #2091

@shrik450

Description

Running many Bedrock models (I've tested Llama 4 and Nova) always fails if the Agent has an output_type set, presumably because the structured output is requested via a forced tool call (toolChoice "any") on the Converse API, which these models reject. This happens even if the agent has NO tools of its own. The repro example below uses a primitive (float), but the same failure occurs with a Pydantic model or a dataclass. The regression seems to have been introduced in 0.2.12 and persists through 0.3.4; versions 0.2.11 and earlier are fine.

Example Code

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.13"
# dependencies = [
#     "pydantic-ai==0.3.4",
#     "dotenv",
# ]
# ///


from dotenv import load_dotenv

from pydantic_ai import Agent


async def main():
    # Load AWS credentials for Bedrock from a local .env file.
    load_dotenv()

    # No tools are registered on this agent; setting any structured
    # output_type (float here, but a Pydantic model or dataclass behaves
    # the same) is enough to trigger the error.
    agent = Agent(
        model="bedrock:us.meta.llama4-scout-17b-instruct-v1:0",
        system_prompt="You are a helpful assistant.",
        output_type=float,
    )

    input_data = "How many pounds are in 10 kilograms?"

    response = await agent.run(input_data)

    print(response.output)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())

Output:

botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the Converse operation: This model doesn't support the toolConfig.toolChoice.any field. Remove toolConfig.toolChoice.any and try again.

Edit: Simplified repro script.
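
For reference, the error seems to come from the forced tool choice rather than from anything else in the agent. Below is a minimal sketch of the Converse request I believe ends up being sent for a structured output (the tool name, schema, and region are my assumptions, not taken from the library); sending it directly with boto3 should hit the same ValidationException, since the error message objects specifically to toolConfig.toolChoice.any:

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

client.converse(
    modelId="us.meta.llama4-scout-17b-instruct-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "How many pounds are in 10 kilograms?"}]}
    ],
    toolConfig={
        "tools": [
            {
                "toolSpec": {
                    # Hypothetical output tool standing in for the one the library registers.
                    "name": "final_result",
                    "description": "Return the final answer as a float.",
                    "inputSchema": {
                        "json": {
                            "type": "object",
                            "properties": {"value": {"type": "number"}},
                            "required": ["value"],
                        }
                    },
                }
            }
        ],
        # Force the model to call some tool; this is the field the error points at.
        "toolChoice": {"any": {}},
    },
)

If that's right, dropping the toolChoice entry (or using {"auto": {}}) should make the same request go through.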

Python, Pydantic AI & LLM client version

Python 3.13, pydantic-ai 0.3.4.
