fix(openai): catch and propagate asyncio.CancelledError #14290
Conversation
Bootstrap import analysis: comparison of import times between this PR and base.

Summary: the average import time from this PR is 265 ± 2 ms; the average import time from base is 267 ± 2 ms. The import time difference between this PR and base is -1.97 ± 0.08 ms.

Import time breakdown: the following import paths have shrunk:
Performance SLOs
Candidate: yunkim/fix-openai-asyncio-cancel (05ba2df)
🔵 No Baseline Data (24 suites)
🔵 coreapiscenario - 12/12 (2 unstable)
🔵 No baseline data available for this suite
Backport da30538 from #14290 to 3.11 (#14302). Co-authored-by: Yun Kim <35776586+Yun-Kim@users.noreply.github.com>

Backport da30538 from #14290 to 3.12 (#14303). Co-authored-by: Yun Kim <35776586+Yun-Kim@users.noreply.github.com>
Resolves an issue where our openai integration did not catch/propagate `asyncio.CancelledError`.

The OpenAI Agents SDK uses the OpenAI library internally to make LLM calls. In one somewhat convoluted scenario, the SDK makes two LLM calls concurrently via `asyncio.gather(...)`; the first can raise an error, which results in the second LLM call being cancelled some time later. Our OpenAI integration did not catch that cancellation and kept executing, returning the underlying LLM response instead of raising. That in turn caused an exception in the OpenAI Agents SDK, which did not expect a `None` response (the call should have been cancelled and raised before reaching that point). This fix directly raises if we notice the asyncio task has been cancelled.
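To make the failure mode concrete, here is a minimal, self-contained sketch (all names are hypothetical, not the integration's actual code) of why a traced coroutine must re-raise `asyncio.CancelledError` rather than swallow it. Note that `asyncio.gather(...)` does not cancel siblings immediately when one awaitable raises, so the cancellation arrives "sometime later", modeled here with an explicit `task.cancel()`:

```python
import asyncio

async def llm_call():
    # Stands in for the second LLM request.
    try:
        await asyncio.sleep(1)
        return "response"
    except asyncio.CancelledError:
        # A wrapper that swallows this and returns None reproduces the
        # bug: the task completes "normally" and the SDK receives None.
        # Re-raising, as this fix does, lets the cancellation propagate.
        raise

async def main():
    task = asyncio.create_task(llm_call())
    await asyncio.sleep(0.01)
    task.cancel()  # models the second call being cancelled after the first fails
    try:
        await task
    except asyncio.CancelledError:
        print("cancellation propagated as expected")

asyncio.run(main())
```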
We're not including a repro test here because 1) it's difficult to simulate this non-standard exception, and 2) the repro case involves using asyncio to simulate a cancelled task, which is not trivial.
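The fix the description implies boils down to the following shape. This is a hedged sketch, not the actual patch: `traced_request` and `span` are placeholders, though `Span.set_exc_info()` and `Span.finish()` are real ddtrace span methods:

```python
import asyncio
import sys

async def traced_request(span, func, *args, **kwargs):
    try:
        return await func(*args, **kwargs)
    except asyncio.CancelledError:
        # CancelledError inherits from BaseException (Python 3.8+), so a
        # plain `except Exception` handler never sees it. Record it on
        # the span and re-raise instead of returning a partial result.
        span.set_exc_info(*sys.exc_info())
        raise
    finally:
        span.finish()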
## Checklist

- [x] PR author has checked that all the criteria below are met
  - The PR description includes an overview of the change
  - The PR description articulates the motivation for the change
  - The change includes tests OR the PR description describes a testing strategy
  - The PR description notes risks associated with the change, if any
  - Newly-added code is easy to change
  - The change follows the [library release note guidelines](https://ddtrace.readthedocs.io/en/stable/releasenotes.html)
  - The change includes or references documentation updates if necessary
  - Backport labels are set (if [applicable](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting))

## Reviewer Checklist

- [x] Reviewer has checked that all the criteria below are met
  - Title is accurate
  - All changes are related to the pull request's stated goal
  - Avoids breaking [API](https://ddtrace.readthedocs.io/en/stable/versioning.html#interfaces) changes
  - Testing strategy adequately addresses listed risks
  - Newly-added code is easy to change
  - Release note makes sense to a user of the library
  - If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  - Backport labels are set in a manner that is consistent with the [release branch maintenance policy](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting)

(cherry picked from commit da30538)