-
I would like to add that it is not my intent to question the deprecation of `BaseHTTPMiddleware`.
-
I'm not as good at visualizing streams/tasks in my head as it seems you are; I think I would need to see some code to understand this approach. There might also be a workaround for 3.11+: #1258 (comment) (see the sketch below).
I had a proof of concept for this at one point: #1528
Have you had a chance to look at #1692? My logic here was that …
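For reference, here is the kind of mechanism a 3.11+ workaround could build on - this is my own sketch of an assumption, and the linked comment may rely on something different: since Python 3.11, `asyncio.create_task()` accepts a `context` argument, so a child task can be run in an explicit `Context` that the parent inspects after the task completes.

```python
import asyncio
import contextvars

request_id = contextvars.ContextVar("request_id", default=None)


async def child() -> None:
    # This set() mutates the Context object the task was started with.
    request_id.set("abc123")


async def main() -> None:
    ctx = contextvars.copy_context()
    task = asyncio.create_task(child(), context=ctx)  # context= requires Python 3.11+
    await task
    # The parent can read the child's changes through the shared Context.
    print(ctx[request_id])  # -> "abc123"


asyncio.run(main())
```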
-
Here is my starting proposal for fixing contextvars: jhominal#1. It is still a work in progress - in particular, as written, contextvars set in dispatch will not be visible in the endpoint, but I intend to fix that by providing an argument on …
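To illustrate that limitation with a generic example (this is only a sketch of the mechanism, not code from jhominal#1): a task runs in a copy of the context taken when it is spawned, so values set afterwards in the spawning task - or in a sibling task - are never visible to it.

```python
import asyncio
import contextvars

user = contextvars.ContextVar("user", default="anonymous")


async def endpoint() -> None:
    await asyncio.sleep(0.1)
    # The task runs in a copy of the context taken when it was created,
    # so it never sees the set() performed below.
    print(user.get())  # -> "anonymous"


async def main() -> None:
    task = asyncio.create_task(endpoint())  # context copied here
    user.set("alice")  # too late: the spawned task already has its copy
    await task


asyncio.run(main())
```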
-
Here is my analysis of the contextvars situation. Because of the way that … Here are the various pieces of user code that interact with context variables:
Here is how the tasks are currently organized in starlette:
Which leads to this matrix of "who can read context variables set by whom":
My main issue with that matrix is that … In order to fix my main issue, I propose the following change in the way that user code is run in various tasks:
Such a change was implemented, as a work in progress, in jhominal#1. That change leads us to this matrix:
Looking at this matrix, there are two regressions brought by the change:
What does everyone else think?
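For readers who want to reproduce the underlying constraint, here is a minimal, self-contained illustration (my own sketch, separate from the matrix above): context variables set inside a task spawned by an anyio task group are not visible in the task that spawned it, which is why moving user code between tasks changes who can read what.

```python
import contextvars

import anyio

request_id = contextvars.ContextVar("request_id", default=None)


async def downstream_app() -> None:
    # Runs in its own task, i.e. in a copy of the spawning task's context.
    request_id.set("abc123")


async def main() -> None:
    async with anyio.create_task_group() as tg:
        tg.start_soon(downstream_app)
    # The child's set() stayed in the child's context copy.
    print(request_id.get())  # -> None


anyio.run(main)
```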
-
After reading #1678 (comment), it appears that the following issue has also been identified in … I think that fixing that race condition should be possible.
-
I have also continued to think about the issue of processing of request bodies in a `BaseHTTPMiddleware`:

```python
import typing

from starlette.requests import ClientDisconnect, Request
from starlette.responses import Response, StreamingResponse
from starlette.types import Message

AsyncByteStream = typing.AsyncIterable[bytes]
AsyncByteStreamFunction = typing.Callable[[AsyncByteStream], AsyncByteStream]


class BaseHTTPMiddleware:
    @staticmethod
    def wrap_request_body(request: Request, func: AsyncByteStreamFunction) -> Request:
        # Pipe the request's byte stream through `func`, and expose the result
        # as a new Request whose receive() yields the transformed chunks.
        wrapped_stream_iterator = func(request.stream()).__aiter__()
        stream_exhausted = False

        async def wrapped_receive() -> Message:
            nonlocal stream_exhausted
            if not stream_exhausted:
                try:
                    chunk = await wrapped_stream_iterator.__anext__()
                    return {"type": "http.request", "body": chunk, "more_body": True}
                except StopAsyncIteration:
                    stream_exhausted = True
                    return {"type": "http.request", "body": b"", "more_body": False}
                except ClientDisconnect:
                    stream_exhausted = True
                    return {"type": "http.disconnect"}
            else:
                # Once the wrapped stream is exhausted, fall back to the
                # original receive so that http.disconnect still comes through.
                return await request.receive()

        return Request(scope=request.scope, receive=wrapped_receive, send=request._send)

    @staticmethod
    def wrap_response_body(response: Response, func: AsyncByteStreamFunction) -> Response:
        # Pipe the response's byte stream through `func`, keeping the status
        # code and raw headers of the original streaming response.
        if not isinstance(response, StreamingResponse):
            raise ValueError(
                "wrap_response_body can be used only on responses issued by call_next"
            )
        response_with_wrapped_content = StreamingResponse(
            content=func(response.body_iterator),
            status_code=response.status_code,
        )
        response_with_wrapped_content.raw_headers = response.raw_headers
        return response_with_wrapped_content
```

In other words, we could provide helper functions that help users of `BaseHTTPMiddleware` transform request and response bodies as streams.
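To make the intent concrete, here is how a `dispatch` could use the proposed `wrap_response_body` helper (a usage sketch of the proposal above; neither helper exists in Starlette today):

```python
from starlette.requests import Request
from starlette.responses import Response


class UppercasingMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next) -> Response:
        async def uppercase(stream):
            # Transform the body chunk by chunk, without buffering it.
            async for chunk in stream:
                yield chunk.upper()

        response = await call_next(request)
        return self.wrap_response_body(response, uppercase)
```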
-
Very interesting discussion! I built the official Sentry integration for Starlette, and a user now has exactly the problem of the request freezing because Sentry reads the request and then … See: getsentry/sentry-python#1573. Is there any workaround that I could implement to prevent the hanging from happening? Any pointers or hints are greatly appreciated.
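One possible direction, along the lines of the `wrap_request_body` idea above (my own sketch, not an official Sentry or Starlette recipe): read and cache the whole body in `dispatch`, then hand `call_next` a `Request` whose `receive()` replays the cached body, so the downstream application never blocks waiting for messages that were already consumed.

```python
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request
from starlette.responses import Response
from starlette.types import Message


class CachedBodyMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next) -> Response:
        body = await request.body()  # consume and inspect the body here

        async def replay_receive() -> Message:
            # Replay the cached body as a single http.request message.
            return {"type": "http.request", "body": body, "more_body": False}

        # Works only on Starlette versions where call_next reads from
        # request.receive, as described earlier in this thread.
        return await call_next(Request(request.scope, receive=replay_receive))
```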
-
Coming back to this. I think the state of the issues and their possible solutions are:
-
When #1438 is fixed, the documentation explanation for deprecating `BaseHTTPMiddleware` should be reviewed, in particular its remaining bugs and limitations.
Personally, I can think of two issues in the implementation of `BaseHTTPMiddleware` (once #1438 is fixed):

- context variables do not propagate across `BaseHTTPMiddleware` - that is the consequence of the downstream ASGI application being run in a task distinct from the upstream `app`;
- if the request has a body and `BaseHTTPMiddleware.dispatch` reads from it, then the request body will not be available for `call_next` - there will be either a deadlock, the first packet will be missing, etc. (see the sketch after this list) - that is the consequence of `call_next` hooking itself directly to `request.receive`, and of there being no attempt at buffering the request body (which, in my opinion, would be a bad idea);

Does anyone have any other bugs/limitations in mind?
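To make the second issue concrete, here is the pattern in its simplest form (a sketch of mine, not code lifted from Starlette or from any particular bug report):

```python
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request
from starlette.responses import Response


class BodyReadingMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next) -> Response:
        body = await request.body()  # consumes the http.request messages
        # ... inspect `body`, e.g. for logging or validation ...
        # The downstream app now calls receive() for a body that was already
        # consumed, so it can hang or see an empty/partial body.
        return await call_next(request)
```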
Now, what to do about the issues I mentioned?

- for the context variables issue, I can think of running `dispatch` in a new task, and running the downstream ASGI app in the same task (no guarantee, it came to my mind as I was writing this post);
- that would move the contextvars cost onto implementers of `BaseHTTPMiddleware`, instead of the current situation where the people who use `BaseHTTPMiddleware` can be completely unaware of the way that they are breaking context variables for other applications and middlewares;
- users of `BaseHTTPMiddleware` that do not make use of context variables would have nothing to change, and their middlewares would stop breaking other people's context variables;
- for the request body issue, keep the current behavior of `BaseHTTPMiddleware`, and document it more strongly, and I would even look into raising an exception if `dispatch` calls both `request.receive` (directly or indirectly) and `call_next` (see the sketch below);
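And a very rough sketch of what the "raise if both are used" idea could look like - `guarded_receive` is a hypothetical helper name of mine, nothing of the sort exists in Starlette today:

```python
from starlette.types import Message, Receive


def guarded_receive(receive: Receive):
    """Wrap receive() and remember whether any message was consumed."""
    consumed = False

    async def wrapper() -> Message:
        nonlocal consumed
        consumed = True
        return await receive()

    def was_consumed() -> bool:
        return consumed

    return wrapper, was_consumed


# call_next could then check the flag before starting the downstream app:
#
#     if was_consumed():
#         raise RuntimeError(
#             "dispatch() already read from request.receive; the downstream "
#             "app would block waiting for the request body"
#         )
```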