
fix(openai): handle async streaming responses for openai v1 client #421

Merged · 5 commits merged into traceloop:main on Feb 13, 2024

Conversation

@najork (Contributor) commented Feb 13, 2024

No description provided.

@najork najork changed the title fix(openai): handle async streaming responses fix(openai): handle async streaming responses for openai v1 client Feb 13, 2024
Comment on lines +23 to +39
```python
@pytest.mark.vcr
async def test_async_completion(exporter, async_openai_client):
    await async_openai_client.completions.create(
        model="davinci-002",
        prompt="Tell me a joke about opentelemetry",
    )

    spans = exporter.get_finished_spans()
    assert [span.name for span in spans] == [
        "openai.completion",
    ]
    open_ai_span = spans[0]
    assert (
        open_ai_span.attributes["llm.prompts.0.user"]
        == "Tell me a joke about opentelemetry"
    )
    assert open_ai_span.attributes.get("llm.completions.0.content")
```
@najork (Contributor, Author) commented Feb 13, 2024

I'm happy to split this into its own PR, but I figured some test coverage for non-streaming async calls would also be beneficial.

@nirga (Member) commented Feb 13, 2024

Thanks @najork! It looks like the tests aren't actually running, since we're missing a pytest async plugin (see the test run logs).
Once they do run, you'll need to run them locally. Here's how to do that:

  1. Set the environment variable OPENAI_API_KEY to an actual API key.
  2. Comment out this line.
  3. Run `poetry run pytest -v --record-mode=once` so that we have a recording of the requests and responses for future test runs. Don't worry: this does not log your API key in any way, only the response completion you got from OpenAI, so the test can later be run without actually calling OpenAI.
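A short sketch of what the missing plugin matters for (the function and chunk values here are invented for illustration): without an async plugin such as pytest-asyncio, pytest collects an `async def` test but never awaits it, so the test body silently never executes. The plugin bridges the gap by driving the coroutine on an event loop.

```python
# Minimal sketch, assuming pytest-asyncio-style behavior: an async test
# function is a coroutine function, so calling it only creates a coroutine
# object; something must run that coroutine on an event loop.
import asyncio
import inspect

async def test_async_example():
    await asyncio.sleep(0)
    return "ran"

# Calling the function does not run its body; it returns a coroutine object.
assert inspect.iscoroutinefunction(test_async_example)

# An async pytest plugin effectively does this for each collected async test:
result = asyncio.run(test_async_example())
assert result == "ran"
```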

```diff
@@ -60,22 +60,23 @@ async def acompletion_wrapper(tracer, wrapped, instance, args, kwargs):
     if context_api.get_value(_SUPPRESS_INSTRUMENTATION_KEY):
         return wrapped(*args, **kwargs)

     async with start_as_current_span_async(
```
@najork (Contributor, Author) commented Feb 13, 2024

This was causing `span.is_recording()` to return `False` in `_abuild_from_streaming_response`, which in turn caused the response attribute setter to short-circuit and the test assertion for span content to fail.

@nirga nirga merged commit 5af77b5 into traceloop:main Feb 13, 2024
7 checks passed
@najork najork deleted the patch-1 branch February 13, 2024 23:08