Overview
The current algorithm used by this client reads a fixed number of bytes from the response stream until an end-of-event marker is detected. At that point the event is parsed and emitted, and any remaining bytes are kept in a buffer. The number of bytes to read defaults to 1024 and is customizable by the user.
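For concreteness, the read loop is roughly the following (a simplified sketch, not the client's exact code; field parsing is omitted):

```python
def iter_events(response, chunk_size=1024):
    """Sketch of the current fixed-size-chunk algorithm."""
    buf = b""
    while True:
        # read() blocks until chunk_size bytes are available or the stream ends.
        chunk = response.raw.read(chunk_size)
        if not chunk:
            break
        buf += chunk
        # An SSE event ends at a blank line (double newline).
        while b"\n\n" in buf:
            raw_event, buf = buf.split(b"\n\n", 1)
            yield raw_event
```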
At one point in this client's history, a problem was noted when the chunk size is greater than the size of an event. For example, imagine the stream contains a single event of 1023 bytes. The read will then block until the server emits at least one more byte, which can significantly delay the first event being emitted by the client.
To remedy this, a change was made to reach into a private variable of the `requests` library (`response.raw._fp`) and do short reads (`read1`) instead of reading from the main `raw` stream. This remedies the above because `read1` attempts to read up to 1024 bytes in a single call but does not block if fewer are available, allowing events shorter than this size to be emitted as they are received.
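The workaround amounts to something like this (again a sketch; the surrounding buffering is unchanged):

```python
# read1() returns as soon as *any* bytes are available, up to chunk_size,
# instead of blocking for a full chunk. _fp is a private urllib3/http.client
# file object, which is the source of the brittleness discussed below.
chunk = response.raw._fp.read1(chunk_size)
```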
Problems
This mostly works, but there are a number of problems with this approach:
- This library now depends on an internal piece of the `requests` library, which makes it more brittle.
- `response.raw._fp` has not passed through the `requests` library's chunked transfer-encoding handling. This means it cannot be used when a server sends a chunked response, and the client must fall back to blocking reads.
- As a user, it is hard to know what chunk size to specify, especially in the case of blocking reads and events that differ significantly in size.
Proposal
I believe all three problems can be fixed by reading the stream line by line instead of in fixed-size chunks.
- `response.raw` supports `readline`, so there is no longer any need to rely on `response.raw._fp`.
- `response.raw` has the chunked transfer encoding already removed, so no special handling is needed when a server uses it.
- There is no longer any need to specify a chunk size, as the client can simply block until an entire line is read.
Since Server-Sent Events is a line-oriented format, it is always safe to block until an entire line has been emitted. Consider a server that has emitted some bytes but no newline: that content cannot contain the end of an event, so blocking until more bytes arrive cannot delay the delivery of any event.
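A minimal sketch of the proposed read loop (only the reading strategy changes; field parsing is elided as before):

```python
def iter_events(response):
    """Sketch of the proposed line-oriented read loop."""
    event_lines = []
    while True:
        # readline() blocks only until a full line (or EOF) is available;
        # response.raw has already had chunked transfer-encoding removed.
        line = response.raw.readline()
        if not line:
            break  # end of stream
        if line.strip() == b"":
            # A blank line terminates an event.
            if event_lines:
                yield b"".join(event_lines)
                event_lines = []
        else:
            event_lines.append(line)

# Usage: the response must be streamed so the body is not pre-read, e.g.
# response = requests.get(url, stream=True)
```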
Alternatively, for backwards compatibility, a chunk size could continue to be accepted and passed to `readline`, causing it to return even earlier if a line exceeded that length. I don't think this would have any practical benefit, though, and the option should be discouraged going forward.
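If that path were kept, it could be as small as the following (assuming `readline` honors a size limit, as the standard `io` stream interface does):

```python
# Returns early if a single line exceeds chunk_size bytes.
line = response.raw.readline(chunk_size)
```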
If you are happy with this suggestion, please let me know and I will submit a PR.