Add tenacity utilities/integration for improved retry handling #2282
Conversation
In case it's useful, here is the `RetryModel` implementation:

```python
from __future__ import annotations as _annotations

from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from dataclasses import dataclass
from typing import Literal

from tenacity import AsyncRetrying

from . import KnownModelName, Model, ModelRequestParameters, StreamedResponse
from .wrapper import WrapperModel
from ..messages import ModelMessage, ModelResponse
from ..settings import ModelSettings


@dataclass(init=False)
class RetryModel(WrapperModel):
    def __init__(
        self,
        wrapped: Model | KnownModelName,
        retry: AsyncRetrying | None = None,
        retry_stream: AsyncRetrying | Literal[False] | None = None,
    ):
        super().__init__(wrapped)
        self.controller = retry
        self.stream_controller = retry if retry_stream is None else retry_stream

    async def request(
        self,
        messages: list[ModelMessage],
        model_settings: ModelSettings | None,
        model_request_parameters: ModelRequestParameters,
    ) -> ModelResponse:
        if self.controller is None:
            # No retry controller configured: pass the request straight through.
            return await super().request(messages, model_settings, model_request_parameters)
        async for attempt in self.controller:
            with attempt:
                return await super().request(messages, model_settings, model_request_parameters)
        raise RuntimeError('The retry controller did not make any attempts')

    @asynccontextmanager
    async def request_stream(
        self,
        messages: list[ModelMessage],
        model_settings: ModelSettings | None,
        model_request_parameters: ModelRequestParameters,
    ) -> AsyncIterator[StreamedResponse]:
        if not self.stream_controller:
            # No special retrying logic for streaming in this case:
            async with super().request_stream(messages, model_settings, model_request_parameters) as stream:
                yield stream
            return

        entered_stream = False
        async for attempt in self.stream_controller:
            # Enter the attempt manually: it must stay open across the `yield`
            # and only be closed once the outcome of the attempt is known.
            attempt.__enter__()
            try:
                async with super().request_stream(messages, model_settings, model_request_parameters) as stream:
                    entered_stream = True
                    attempt.__exit__(None, None, None)
                    yield stream
                    return
            except BaseException as exc:
                if entered_stream:
                    # The failure came from consuming the stream, not from
                    # setting it up, so it isn't ours to retry.
                    raise
                # Route the setup failure through the attempt so the
                # controller can decide whether to swallow it and retry.
                if not attempt.__exit__(type(exc), exc, exc.__traceback__):
                    raise
        raise RuntimeError('The retry controller did not make any attempts')
```
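The `async for attempt in controller` / `with attempt` shape above comes from tenacity's `AsyncRetrying`. As a rough sketch of how that protocol works, here is a stdlib-only stand-in (all names here, such as `SimpleRetrying`, are illustrative, not part of tenacity or pydantic-ai): the controller yields context-manager attempts, a failing attempt swallows its exception so the loop can retry, and a success stops the iteration.

```python
import asyncio


class _Attempt:
    """One attempt yielded by the controller; a context manager."""

    def __init__(self, controller: 'SimpleRetrying') -> None:
        self._controller = controller

    def __enter__(self) -> '_Attempt':
        return self

    def __exit__(self, exc_type, exc, tb) -> bool:
        if exc_type is None:
            self._controller.succeeded = True
            return False
        # Record and swallow the error so the async-for loop can retry.
        self._controller.last_error = exc
        return True


class SimpleRetrying:
    """Minimal stand-in for tenacity's AsyncRetrying."""

    def __init__(self, max_attempts: int = 3) -> None:
        self.max_attempts = max_attempts
        self.succeeded = False
        self.last_error = None

    def __aiter__(self):
        self._attempt_number = 0
        return self

    async def __anext__(self) -> _Attempt:
        if self.succeeded:
            raise StopAsyncIteration
        if self._attempt_number >= self.max_attempts:
            # Out of attempts: re-raise the last failure.
            raise self.last_error
        self._attempt_number += 1
        return _Attempt(self)


async def main() -> int:
    calls = 0

    async def flaky() -> int:
        nonlocal calls
        calls += 1
        if calls < 3:
            raise ConnectionError('transient failure')
        return 42

    # Same shape as RetryModel.request above:
    async for attempt in SimpleRetrying(max_attempts=5):
        with attempt:
            return await flaky()
    raise RuntimeError('The retry controller did not make any attempts')


print(asyncio.run(main()))  # succeeds on the third call, prints 42
```

Real tenacity adds stop conditions, wait strategies, and retry predicates on top of this loop, but the control flow `RetryModel` relies on is the same.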
PR Change Summary

Added tenacity utilities for improved retry handling in HTTP requests, enhancing error resilience and user experience.

Added Files
I think that'd be better
I think so, save for some comments I left
I don't think we need it, acting directly on the HTTP client level is more powerful.
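To illustrate the "HTTP client level" idea: the transport the client uses is wrapped so every send is retried, and every consumer of that client gets retries for free. This is a dependency-free sketch; in the actual PR this would wrap an httpx `AsyncBaseTransport`, while here a stub transport stands in so the example runs on its own, and all names (`RetryTransport`, `FlakyTransport`, `RetryableError`) are illustrative.

```python
import asyncio


class RetryableError(Exception):
    pass


class FlakyTransport:
    """Stub transport that fails twice before succeeding."""

    def __init__(self) -> None:
        self.calls = 0

    async def send(self, request: str) -> str:
        self.calls += 1
        if self.calls < 3:
            raise RetryableError('503 Service Unavailable')
        return f'200 OK for {request}'


class RetryTransport:
    """Wraps another transport and retries send() with exponential backoff."""

    def __init__(self, inner, max_attempts: int = 5, base_delay: float = 0.01) -> None:
        self.inner = inner
        self.max_attempts = max_attempts
        self.base_delay = base_delay

    async def send(self, request: str) -> str:
        for attempt in range(1, self.max_attempts + 1):
            try:
                return await self.inner.send(request)
            except RetryableError:
                if attempt == self.max_attempts:
                    raise
                # Exponential backoff between attempts.
                await asyncio.sleep(self.base_delay * 2 ** (attempt - 1))
        raise AssertionError('unreachable')


transport = RetryTransport(FlakyTransport())
print(asyncio.run(transport.send('GET /models')))  # prints: 200 OK for GET /models
```

The design advantage is that retries happen below the model layer, so streaming, non-streaming, and any other request path through the client are covered uniformly.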
Watching you guys work is awesome. Appreciate this! Thank you!
Satisfy linter
```python
def should_retry_status(response):
    """Raise exceptions for retryable HTTP status codes."""
    if response.status_code in (429, 502, 503, 504):
        response.raise_for_status()  # This will raise HTTPStatusError
```
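A quick dependency-free check of what this predicate does, with a stub standing in for the real response type (`StubResponse` and the local `HTTPStatusError` here are illustrative stand-ins, not httpx itself): only the four retryable status codes trigger the exception, so non-retryable errors like a 404 pass through untouched.

```python
class HTTPStatusError(Exception):
    """Stand-in for httpx.HTTPStatusError."""


class StubResponse:
    def __init__(self, status_code: int) -> None:
        self.status_code = status_code

    def raise_for_status(self) -> None:
        if self.status_code >= 400:
            raise HTTPStatusError(f'HTTP {self.status_code}')


def should_retry_status(response) -> None:
    """Raise exceptions for retryable HTTP status codes."""
    if response.status_code in (429, 502, 503, 504):
        response.raise_for_status()


should_retry_status(StubResponse(200))  # success: no exception
should_retry_status(StubResponse(404))  # non-retryable error: also no exception
try:
    should_retry_status(StubResponse(503))
except HTTPStatusError as e:
    print('retryable:', e)  # prints: retryable: HTTP 503
```

A retry layer can then key off the raised exception type, which is how tenacity-style predicates such as `retry_if_exception_type` are typically wired up.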
I've been trying to follow this approach, but for some reason, when the code reaches this line it explodes because the request is not set on the response object.
I wonder if it's related to this new change.
@odedva Thanks for the report, can you please file a new issue for this?
This came up in a discussion in the public Slack with @mpfaffenberger: https://pydanticlogfire.slack.com/archives/C083V7PMHHA/p1752430089758299.
Some things to resolve before merging:
- Should the module be called `pydantic_ai.retries` or similar instead of `pydantic_ai.tenacity`?
- Should there be a tenacity-integrated `Model`? I have a tenacity-integrated `WrapperModel`, but I'm not sure if it's necessary/very useful on top of the async transport stuff currently included in this PR.