refactor(runner): add InferenceError to all pipelines #188
Conversation
This commit adds the inference error logic from the SAM2 pipeline to all pipelines so that users receive a clear warning when they supply incorrect arguments.
Force-pushed from f08aced to a469892 (compare)
This commit ensures that users get a descriptive error message when the GPU runs out of memory.
This commit ensures that all response errors are known by FastAPI and therefore shown in the docs.
This commit adds some missing error handling to the pipeline worker functions.
This commit improves the out of memory error handling by using the native torch error.
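The out-of-memory handling the commits above describe can be sketched roughly as follows. This is an illustrative stand-in, not the runner's actual code: the real handler catches `torch.cuda.OutOfMemoryError` and returns a FastAPI `JSONResponse`, but here a local exception class and a plain dict stand in so the sketch runs without torch or FastAPI installed, and the function name `handle_inference_error` is hypothetical.

```python
class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.cuda.OutOfMemoryError (torch may not be installed)."""


def handle_inference_error(e: Exception) -> dict:
    """Map a pipeline exception to a JSON-serializable error payload.

    Known error types get a descriptive message and status code; anything
    else falls back to a generic internal server error.
    """
    if isinstance(e, OutOfMemoryError):
        return {
            "status_code": 500,
            "detail": "GPU out of memory. Try reducing the input size.",
        }
    return {"status_code": 500, "detail": f"Internal server error: {e}"}
```

In the real routes the payload would be wrapped in a `JSONResponse` so FastAPI can document the error responses in the generated docs, as the commits above describe.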
Force-pushed from dba6dc8 to 53d76eb (compare)
This commit ensures that errors thrown by the runner are forwarded to the orchestrator. It applies the logic used by the SAM2 and audio-to-text pipelines to the other pipelines.
This commit applies the black formatter to the PR files.
This commit removes the redundant str call.
Added one comment, other than that LGTM
runner/app/routes/image_to_video.py (Outdated)

    Returns:
        A JSONResponse with the appropriate error message and status code.
    """
    if isinstance(e, torch.cuda.OutOfMemoryError):
I see this part of the code is repeated in every pipeline, would it maybe make sense to extract it and reuse?
@leszko thanks for the quick review. Yeah, I was planning to extract it, similar to the logic @mjh1 implemented in #226 (see #226 (comment)). However, I wanted to do that in a subsequent pull request. I can do it here and co-author @mjh1 👍🏻.
@leszko as also stated on Discord, I added a variant of @gioelecerati's logic in ae8df74.
This commit introduces a global error handling configuration and function to streamline error management across different pipelines. The new `handle_pipeline_exception` function centralizes error handling logic, allowing pipelines to override it if necessary. This change reduces code duplication and improves maintainability. Co-authored-by: rickstaa <rick.staa@outlook.com>
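The centralized handler described in this commit could look roughly like the sketch below. This is a hedged illustration of the pattern, not the actual ae8df74 code: the names `ERROR_CONFIG` and `handle_pipeline_exception` follow the commit message, but the signature, the config shape, and the empty-string fallback semantics are assumptions made for the example.

```python
# Global mapping of exception type name -> (status_code, default message).
# Pipelines can pass their own entries to override these defaults.
ERROR_CONFIG = {
    "OutOfMemoryError": (500, "Out of memory."),
    "ValueError": (400, "Invalid input."),
}


def handle_pipeline_exception(e, custom_config=None):
    """Resolve an exception to (status_code, message).

    Merging the pipeline-specific config over the global one lets each
    pipeline override the defaults without duplicating the handler.
    """
    config = {**ERROR_CONFIG, **(custom_config or {})}
    status, message = config.get(type(e).__name__, (500, "Internal server error."))
    # Assumption for this sketch: an empty configured message means
    # "expose the exception's own text instead of a canned message".
    if message == "":
        message = str(e)
    return status, message
```

The point of the pattern is the one made in the review above: the `isinstance` ladder repeated in every route collapses into a single lookup table that each pipeline can extend.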
Force-pushed from 21a3d77 to ae8df74 (compare)
This commit ensures that pipelines can override the default error message when the global error configuration contains an empty string.
This commit adds a test for the 'handle_pipeline_exception' route utility function. It also fixes some errors in that function.
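A test for such a utility could look like the plain-assert sketch below. The minimal handler is inlined only so the sketch runs standalone; both it and the test cases are hypothetical, not the runner's actual test suite.

```python
# Minimal inlined stand-in for the utility under test (illustrative only).
ERROR_CONFIG = {"ValueError": (400, "Invalid input.")}


def handle_pipeline_exception(e, custom_config=None):
    config = {**ERROR_CONFIG, **(custom_config or {})}
    status, message = config.get(type(e).__name__, (500, "Internal server error."))
    if message == "":
        message = str(e)  # empty string -> expose the raw exception text
    return status, message


def test_handle_pipeline_exception():
    # A known error type resolves to its configured status and message.
    assert handle_pipeline_exception(ValueError("x")) == (400, "Invalid input.")
    # Unknown error types fall back to a generic 500.
    assert handle_pipeline_exception(KeyError("k"))[0] == 500
    # An empty configured message exposes the exception text instead.
    assert handle_pipeline_exception(ValueError("boom"), {"ValueError": (422, "")}) == (422, "boom")


test_handle_pipeline_exception()
```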
This pull request adds the inference error logic from the SAM2 pipeline to all pipelines so that users receive a warning when they supply incorrect arguments. Please note that the 422 errors thrown by FastAPI are not yet handled correctly, and we don't do validation on the go-livepeer side yet (see https://linear.app/livepeer-ai/project/improve-error-handling-117c11ff78f3/issues). As a result, although this pull request improves the behavior, we don't yet have an optimised solution.