
Python: Multi‑turn with AgentFrameworkAgent + AzureAIClient to Azure AI Projects Foundry v2 agent fails on 2nd turn: agent_framework_azure_ai sends invalid input to agent/Responses #2964

@djw-bsn

Description


I realise that everything here is in beta/preview but I thought it was still worth reporting the bug anyway...

When AzureAIClient is configured to use an Azure AI Projects agent (via AIProjectClient.get_openai_client() + extra_body["agent"]), the input produced by _prepare_input + _openai_content_parser is an OpenAI-style chat array, which Azure AI Projects rejects with invalid_payload (expects string / annotated string).

When using agent_framework_azure_ai.AzureAIClient with an Azure AI Projects Foundry v2 agent (via AIProjectClient.get_openai_client()), multi‑turn chat via AgentFrameworkAgent fails on the second turn. Internally, the call to the Azure AI Projects agent/Responses endpoint (/openai/responses) returns 400 invalid_payload, indicating that /input is an array but should be a string, and that /input/1/... is missing the required type/annotations properties.

Example request body for the second turn:

{
  "messages": [
    {"role": "system",    "content": "System: \n\n"},
    {"role": "system",    "content": "System: \n\n"},
    {"role": "user",      "content": "test 1"},
    {"role": "assistant", "content": "Hello! How can I assist you today?"},
    {"role": "user",      "content": "test 2"}
  ],
  "threadId": "XXXX"
}

Partial payload sent to the agent/Responses API:

# ....
    "input": [
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "test 1"
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "type": "output_text",
                    "text": "Hello! How can I assist you today?"
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "test 2"
                }
            ]
        }
    ], 
# ....

Server error:

'type: Value is "array" but should be "string"' at '/input',
'required: Required properties ["type"] are not present' at '/input/1',
'type: Value is "array" but should be "string"' at '/input/1/content',
'required: Required properties ["annotations"] are not present' at '/input/1/content/0'.
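Read literally, those four messages imply that the validator wants input to be a plain string, and that any array form would need type and annotations on each element. The following pre-flight check is a sketch reconstructed only from the error text above, not from any published schema for the Foundry v2 agent/Responses endpoint:

```python
def preflight_input_errors(payload: dict) -> list[str]:
    """Mirror the constraints implied by the invalid_payload error text.

    This is a guess reconstructed from the 400 response above, not an
    official schema for the Foundry v2 agent/Responses endpoint.
    """
    errors: list[str] = []
    inp = payload.get("input")
    if isinstance(inp, list):
        # The service's first complaint: /input must be a string.
        errors.append('type: Value is "array" but should be "string" at /input')
        for i, item in enumerate(inp):
            if "type" not in item:
                errors.append(f'required: ["type"] is not present at /input/{i}')
            content = item.get("content")
            if isinstance(content, list):
                errors.append(
                    f'type: Value is "array" but should be "string" at /input/{i}/content'
                )
                for j, part in enumerate(content):
                    if "annotations" not in part:
                        errors.append(
                            f'required: ["annotations"] is not present at /input/{i}/content/{j}'
                        )
    return errors
```

Run against the partial payload shown above, this flags the same kinds of violations the service reports (the real validator apparently stops earlier, since it only mentioned /input/1).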

The error appears to be caused by how agent_framework_azure_ai._prepare_input and _openai_content_parser serialize messages into input.

Steps to reproduce

  • Minimal FastAPI server using AzureAIClient + ChatAgent + AgentFrameworkAgent.
  • Minimal httpx client posting two AG‑UI style turn bodies (messages + threadId).
  • Turn 1 (system + single user) works.
  • Turn 2 (system + prior user + prior assistant + new user) triggers the 400 from Azure AI Projects.

(Full server and client repro scripts can be provided if required.)
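For reference, the two AG‑UI style turn bodies posted by the repro client look like this (threadId is a placeholder, as in the example above):

```python
import json

thread_id = "XXXX"  # placeholder value, as in the example above

# Turn 1: system + single user message; this turn succeeds.
turn_1 = {
    "messages": [
        {"role": "system", "content": "System: \n\n"},
        {"role": "user", "content": "test 1"},
    ],
    "threadId": thread_id,
}

# Turn 2 resends the full history plus the new user message; this is the
# body that ultimately leads to the 400 from Azure AI Projects.
turn_2 = {
    "messages": [
        {"role": "system", "content": "System: \n\n"},
        {"role": "system", "content": "System: \n\n"},
        {"role": "user", "content": "test 1"},
        {"role": "assistant", "content": "Hello! How can I assist you today?"},
        {"role": "user", "content": "test 2"},
    ],
    "threadId": thread_id,
}

print(json.dumps(turn_2, indent=2))
```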

Key Code Paths

From agent_framework_azure_ai.AzureAIClient:

def _prepare_input(self, messages: MutableSequence[ChatMessage]) -> tuple[list[ChatMessage], str | None]:
    """Prepare input from messages and convert system/developer messages to instructions."""
    result: list[ChatMessage] = []
    instructions_list: list[str] = []
    instructions: str | None = None

    # System/developer messages are turned into instructions, since there are no such message roles in Azure AI.
    for message in messages:
        if message.role.value in ["system", "developer"]:
            for text_content in [content for content in message.contents if isinstance(content, TextContent)]:
                instructions_list.append(text_content.text)
        else:
            result.append(message)

    if len(instructions_list) > 0:
        instructions = "".join(instructions_list)

    return result, instructions

All system / developer messages are collapsed into an instructions string.
All other messages (user + assistant, including history) are left as a list that becomes the basis for input.
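The collapsing behavior can be illustrated with lightweight stand-ins for ChatMessage and TextContent (simplified; the real agent_framework types carry more structure, and role is an enum with a .value attribute):

```python
from dataclasses import dataclass

@dataclass
class FakeText:
    text: str

@dataclass
class FakeMessage:
    role: str            # stand-in for Role; the real type exposes role.value
    contents: list       # stand-in for the message's content parts

def prepare_input(messages):
    """Mimic _prepare_input: pull system/developer text into instructions."""
    result, instructions_list = [], []
    for m in messages:
        if m.role in ("system", "developer"):
            instructions_list.extend(
                c.text for c in m.contents if isinstance(c, FakeText)
            )
        else:
            result.append(m)
    return result, "".join(instructions_list) or None

history = [
    FakeMessage("system", [FakeText("System: \n\n")]),
    FakeMessage("system", [FakeText("System: \n\n")]),
    FakeMessage("user", [FakeText("test 1")]),
    FakeMessage("assistant", [FakeText("Hello! How can I assist you today?")]),
    FakeMessage("user", [FakeText("test 2")]),
]

prepared, instructions = prepare_input(history)
# Both system messages collapse into one instructions string; the three
# user/assistant messages flow through unchanged and become `input`.
```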

super().prepare_options (from OpenAIBaseResponsesClient) then uses _openai_content_parser to turn prepared_messages into the OpenAI Responses input payload.

def _openai_content_parser(
    self,
    role: Role,
    content: Contents,
    call_id_to_id: dict[str, str],
) -> dict[str, Any]:
    """Parse contents into the openai format."""
    match content:
        case TextContent():
            return {
                "type": "output_text" if role == Role.ASSISTANT else "input_text",
                "text": content.text,
            }
        # ...other content types omitted...

This builds OpenAI Responses-style { "type": "input_text"/"output_text", "text": "..." } objects. Combined with OpenAIBaseResponsesClient, this yields an input that is an array of { role, content: [...] } chat messages. But the Azure AI Projects service rejects that input with the previously listed error.
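With a stand-in for the TextContent branch of the parser, the resulting input array for the turn-2 history takes exactly the shape the service rejects:

```python
def parse_content(role: str, text: str) -> dict:
    # Mirrors the TextContent branch of _openai_content_parser:
    # assistant text becomes output_text, everything else input_text.
    return {
        "type": "output_text" if role == "assistant" else "input_text",
        "text": text,
    }

def build_input(prepared):
    # prepared: (role, text) pairs left over after _prepare_input
    return [
        {"role": role, "content": [parse_content(role, text)]}
        for role, text in prepared
    ]

input_payload = build_input([
    ("user", "test 1"),
    ("assistant", "Hello! How can I assist you today?"),
    ("user", "test 2"),
])
# input_payload is an array of {role, content: [...]} messages -- the shape
# Azure AI Projects rejects with 'Value is "array" but should be "string"'.
```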

So for this Foundry v2 agent, agent/Responses is validating input as a string / annotated string, not as a chat array. The OpenAI-style mapping that AzureAIClient currently uses via OpenAIBaseResponsesClient is incompatible with that.

Is it possible to adjust the mapping for this integration so that input is serialized in a way that Azure AI Projects agent/Responses will accept? Or is this a problem in the v2 Foundry agent responses API?
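One conceivable client-side direction, purely as a sketch: flatten the prepared history into a single transcript string before sending, since the validator wants /input to be a string. Whether the Foundry v2 endpoint interprets such a transcript sensibly is an open question, and flatten_input below is a hypothetical helper, not part of agent_framework_azure_ai:

```python
def flatten_input(prepared) -> str:
    """Hypothetical workaround: serialize (role, text) history to one string.

    The service error demands a string at /input; whether a role-prefixed
    transcript is an acceptable or sensible encoding for the Foundry v2
    agent is an assumption, not a documented contract.
    """
    return "\n".join(f"{role}: {text}" for role, text in prepared)

flat = flatten_input([
    ("user", "test 1"),
    ("assistant", "Hello! How can I assist you today?"),
    ("user", "test 2"),
])
```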

Metadata

Labels

model clients (Issues related to the model client implementations), python, v1.0 (Features being tracked for the version 1.0 GA)
