
Conversation

abhibongale

Responses and Completions have a max_output_tokens field. It is currently missing from the create request and response objects in the Responses API.

This PR fixes that.

fixes: #3562


meta-cla bot commented Oct 6, 2025

Hi @abhibongale!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

Signed-off-by: Abhishek Bongale <abhishekbongale@outlook.com>
@abhibongale force-pushed the enhance/max_output_tokens branch from 5fa97b5 to bb58da2 on October 6, 2025 08:53
@abhibongale changed the title from "Add max_output_tokens as argument to Response API" to "feat: Add max_output_tokens as argument to Response API" on Oct 6, 2025

meta-cla bot commented Oct 6, 2025

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

meta-cla bot added the label CLA Signed (managed by the Meta Open Source bot) on Oct 6, 2025
messages=messages,
response_tools=tools,
temperature=temperature,
max_tokens=max_output_tokens,
Contributor

Why did you change the name to max_tokens here? Keep the same name?
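The reviewer's point can be sketched as follows. This is a hypothetical helper, not the repo's actual code: if the downstream chat-completions call really does require the max_tokens name, confine the rename to one clearly marked boundary and keep max_output_tokens everywhere else.

```python
# Hedged sketch (function name is hypothetical): translate the Responses-API
# parameter name to the chat-completions name in a single, well-marked spot.
def build_chat_completion_kwargs(messages, tools=None, temperature=None,
                                 max_output_tokens=None):
    kwargs = {"messages": messages, "tools": tools, "temperature": temperature}
    if max_output_tokens is not None:
        # Deliberate rename at the boundary: the Responses API's
        # max_output_tokens maps onto chat completions' max_tokens.
        kwargs["max_tokens"] = max_output_tokens
    return kwargs
```

Keeping the rename in one place makes it greppable and avoids the confusion flagged in this review.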

description: "Agents
  APIs for creating and interacting with agentic systems."
Contributor

Looks like your pre-commit setup is not correct: it needs Python 3.12 and pre-commit==4.3.0.

_ = response.output[0].call_id


def test_response_with_max_output_tokens(compat_client, text_model_id):
Contributor

Before testing, maybe keep PRs in draft state? You have not implemented the functionality yet.

@ashwinb marked this pull request as draft on October 10, 2025 14:33
Contributor
@jwm4 left a comment:

I think fully supporting this parameter might be more complicated than what this PR is doing. Keep in mind that a single Responses call can involve multiple rounds of calling chat completions, invoking tools, and then calling chat completions again with the results. The code needs to track how many tokens have been used so far, reduce max_output_tokens on each call, and probably add special handling for the case where it runs out of tokens before the inference + tool-calling loop is done.
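The bookkeeping described in this comment can be sketched roughly as follows. This is a minimal illustration, not the repo's implementation: the function names, dict shapes, and usage fields are assumptions standing in for the real inference and tool-execution interfaces.

```python
# Hedged sketch of a token budget threaded through a multi-round
# inference + tool-calling loop, as the review comment describes.
def run_response_loop(chat_completion, handle_tools, messages, max_output_tokens):
    remaining = max_output_tokens
    outputs = []
    while remaining > 0:
        # Each round gets only the budget that is still left.
        result = chat_completion(messages, max_tokens=remaining)
        outputs.append(result)
        remaining -= result["usage"]["completion_tokens"]
        if not result.get("tool_calls"):
            break  # inference finished without needing another tool round
        if remaining <= 0:
            # Budget exhausted mid-loop: surface an "incomplete" marker
            # instead of silently truncating the tool-calling sequence.
            outputs.append({"status": "incomplete", "reason": "max_output_tokens"})
            break
        messages = messages + handle_tools(result["tool_calls"])
    return outputs
```

The key design point is that max_output_tokens caps the total generated across all rounds, so each chat-completions call receives the remaining budget rather than the original value.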


Labels: CLA Signed (managed by the Meta Open Source bot)

Development: successfully merging this pull request may close the issue "Responses max_output_tokens".

3 participants