
Conversation

@viniciusdsmello
Contributor

@viniciusdsmello viniciusdsmello commented Oct 14, 2025

Pull Request

Summary

This PR adds comprehensive tracing support for OpenAI's new Responses API (client.responses.create) while maintaining full backward compatibility with the existing Chat Completions API (client.chat.completions.create).

Changes

  • Updated trace_openai and trace_async_openai to dynamically detect and patch the responses.create endpoint.
  • Implemented new handler functions for both streaming and non-streaming Responses API calls (sync and async).
  • Added helper functions for Responses API-specific parameter mapping, output parsing, streaming chunk extraction, and usage data extraction.
  • Modified add_to_trace to differentiate between Chat Completions and Responses API calls for improved trace naming and metadata.
  • Created a comprehensive example (examples/tracing/openai/responses_api_example.py) demonstrating usage for both APIs, including streaming and function calling.
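The dynamic detection and patching described in the first two bullets could be sketched roughly as follows. This is an illustrative outline, not the PR's actual implementation: the helper names (`_traced`, `trace_client`) and the latency print are placeholders for what `trace_openai` / `add_to_trace` do in Openlayer.

```python
# Hypothetical sketch: wrap responses.create (when the SDK exposes it) and
# chat.completions.create so both endpoints are traced. Helper names are
# illustrative, not the PR's real API.
import time
from typing import Any, Callable


def _traced(create_fn: Callable[..., Any], api_type: str) -> Callable[..., Any]:
    """Return a wrapper that records latency for one endpoint."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        start = time.time()
        response = create_fn(*args, **kwargs)
        latency_ms = (time.time() - start) * 1000
        # The real tracer would call add_to_trace(...) here instead.
        print(f"[{api_type}] traced call in {latency_ms:.1f} ms")
        return response
    return wrapper


def trace_client(client: Any) -> Any:
    """Patch chat.completions.create and, when available, responses.create."""
    client.chat.completions.create = _traced(
        client.chat.completions.create, "chat_completion"
    )
    # Dynamic detection keeps older SDK versions (no Responses API) working.
    if hasattr(client, "responses"):
        client.responses.create = _traced(client.responses.create, "response")
    return client
```

Checking `hasattr(client, "responses")` before patching is what preserves backward compatibility with SDK versions that predate the Responses API.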

Context

OpenAI's Responses API unifies multiple capabilities (chat, text, tool use, JSON mode) into a single interface with an improved metadata structure. This update extends Openlayer's tracing logic to cover the new endpoint, so users can adopt the latest OpenAI features with full observability and without breaking existing integrations.
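For reference, the same request expressed against both endpoints differs mainly in the parameter name (`messages` vs. `input`), which is why the tracer needs endpoint-specific parameter mapping. Illustrative request shapes only:

```python
# The same question sent to each endpoint; note messages vs. input.
chat_kwargs = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What is 3+3?"}],
}
responses_kwargs = {
    "model": "gpt-4o",
    "input": [{"role": "user", "content": "What is 3+3?"}],
}
```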

Testing

  • Unit tests (comprehensive test suites verifying backward compatibility and new Responses API features for both sync and async clients)
  • Manual testing (verified functionality using the new example script)


@viniciusdsmello viniciusdsmello changed the title Update openai tracing for responses api [OPEN-7543] Update OpenAI Wrapper to support Responses API Oct 15, 2025
@viniciusdsmello viniciusdsmello marked this pull request as ready for review October 15, 2025 14:22
@viniciusdsmello viniciusdsmello force-pushed the cursor/update-openai-tracing-for-responses-api-58e5 branch from c588901 to 82f5365 Compare October 17, 2025 18:19
@viniciusdsmello viniciusdsmello self-assigned this Oct 21, 2025
@gustavocidornelas gustavocidornelas self-assigned this Oct 22, 2025
Contributor

@gustavocidornelas gustavocidornelas left a comment

@viniciusdsmello, this PR is almost good to go! I left a couple of comments in some parts of the code.

Besides them, it would be good if you could squash the commits so that this PR has a single commit (following the convention we discussed).

Thanks!

@@ -0,0 +1,147 @@
{
Contributor
No need for this example. This is redundant with openai_tracing.ipynb. I would just add a new cell showing that the same tracer works with the responses API.

model_parameters=get_responses_model_parameters(kwargs),
raw_output=response.model_dump() if hasattr(response, "model_dump") else str(response),
id=inference_id,
metadata={"api_type": "responses"},
Contributor
You can remove this metadata. Unless there's an intention to leverage it somehow in the platform. From the name of the step, it's already clear that it's a Response call (instead of a chat completion call)

return result


def extract_responses_inputs(kwargs: Dict[str, Any]) -> Dict[str, Any]:
Contributor
There's an issue with this function. Consequently, the inputs for the OpenAI Response steps are not rendering properly:

[screenshot: the step's input does not render as expected]

For this step, the input should be something like:

{
    "prompt": [{"role": "user", "content": "What is 3+3?"}]
}

I believe you're just sending: {"prompt": "What is 3+3?"}.

You just need to double check the Responses API reference to ensure this function works for all input formats (e.g., single user message, multi-turn conversation, etc.)

