Closed
Labels
.NET · a2a (Issue relates to A2A) · v1.0 (Features being tracked for the version 1.0 GA)
Description
From our main application we call an agent over A2A, using the following example code. We create the AgentRunOptions to retrieve the token usage from the request the agent will make to the LLM.
```csharp
var a2aClient = new A2AClient(new Uri(agentUrl));
var agent = a2aClient.GetAIAgent();

var messages = new List<ChatMessage>
{
    new ChatMessage(ChatRole.User, "test Tell me something interesting.")
};

var dict = new AdditionalPropertiesDictionary
{
    { "model_extras", new { stream_options = new { include_usage = true } } }
};
var options = new AgentRunOptions { AdditionalProperties = dict };

var result = await agent.RunAsync(messages, thread: null, options: options);
```

In our project hosting the agent, we map the agent like this:
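For reference, the payload we attach under `model_extras` is just a small anonymous object; serialized with System.Text.Json it reduces to the following JSON (a minimal sketch — whether the A2A transport serializes `AdditionalProperties` at all is exactly what this issue is about):

```csharp
using System.Text.Json;

// Hypothetical illustration (not the actual A2A wire format): the anonymous
// object attached under "model_extras" reduces to plain JSON like this.
var extras = new { stream_options = new { include_usage = true } };
Console.WriteLine(JsonSerializer.Serialize(extras));
// prints {"stream_options":{"include_usage":true}}
```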
```csharp
builder.Services.AddSingleton<TestAgent>();
var app = builder.Build();

var testAgent = app.Services.GetRequiredService<TestAgent>();
app.MapA2A(testAgent, "/agent/test", agentCard: new()
{
    Name = "Test Agent",
    Description = "A test agent for usage information",
    Version = "1.0.0",
});
```

When the request arrives in the TestAgent, the AgentRunOptions are null:
```csharp
public override async Task<AgentRunResponse> RunAsync(IEnumerable<ChatMessage> messages, AgentThread? thread = null,
    AgentRunOptions? options = null, CancellationToken cancellationToken = default)
{
    using var activity = ActivitySource.StartActivity("TestAgent.Run");
    activity?.SetTag("agent.name", Name);
    activity?.SetTag("agent.id", Id);

    // Get Azure OpenAI configuration
    var endpoint = new Uri(configuration["Azure:Endpoint"]!);
    var apiKey = configuration["Azure:ApiKey"]!;
    var deploymentName = configuration["Azure:ModelId"] ?? "gpt-4.1";
    var userService = UserService.Create(httpContextAccessor.HttpContext);

    // Create Azure OpenAI client
    var client = new AzureOpenAIClient(endpoint, new AzureKeyCredential(apiKey));
    var chatClient = client.GetOpenAIResponseClient(deploymentName).AsIChatClient();

    var instructions = @"
        You are a helpful assistant.
        Provide a brief, friendly response to the user's message.
        Keep responses concise (1-2 sentences).
    ";

    // Create ChatClientAgent
    var chatAgent = new ChatClientAgent(chatClient, instructions, "TestAgent");

    // Ensure options include usage information
    var runOptions = options ?? new AgentRunOptions();
    if (runOptions.AdditionalProperties == null || !runOptions.AdditionalProperties.ContainsKey("model_extras"))
    {
        runOptions.AdditionalProperties ??= new AdditionalPropertiesDictionary();
        runOptions.AdditionalProperties["model_extras"] = new { stream_options = new { include_usage = true } };
    }

    // Delegate to ChatClientAgent
    var result = await chatAgent.RunAsync(messages, null, options: runOptions, cancellationToken);

    // Extract the usage information from the result and include it in the result AdditionalProperties
    var usageJson = ExtractUsageFromResponse(result);
    if (!string.IsNullOrEmpty(usageJson))
    {
        result.AdditionalProperties ??= new AdditionalPropertiesDictionary();
        result.AdditionalProperties["usage"] = usageJson;

        if (result.Messages.Count > 0)
        {
            var lastMessage = result.Messages.Last();
            var content = lastMessage.Contents.LastOrDefault();
            if (content != null)
            {
                content.RawRepresentation = result.AdditionalProperties;
            }
        }
    }

    return result;
}
```

Is this intended behaviour?
Not sure I can ask a follow-up question here, but: when the options arrive as null, I hard-code the AgentRunOptions to request token usage. The agent then returns an AgentRunResponse with usage statistics in it, yet those never arrive in our main program that calls the A2A agent.
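One possible factor (an assumption on my part, not confirmed behaviour of the A2A layer): raw provider payloads such as `RawRepresentation` are commonly excluded from JSON serialization, so anything stashed there would never cross the wire. A minimal, self-contained illustration of that serialization behaviour, using a hypothetical `ContentItem` stand-in rather than the real response types:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

// If the transport serializes response content like this, anything attached
// to RawRepresentation is dropped before it reaches the caller.
var item = new ContentItem { Text = "hello", RawRepresentation = new { usage = 42 } };
Console.WriteLine(JsonSerializer.Serialize(item));
// prints {"Text":"hello"} — the raw payload never crosses the wire

// Hypothetical stand-in for a response content item: "Text" serializes,
// while "RawRepresentation" is skipped by the serializer.
class ContentItem
{
    public string Text { get; set; } = "";

    [JsonIgnore]
    public object? RawRepresentation { get; set; }
}
```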
Status: Done