WebSocket FragmentExtension can produce an invalid stream of frames #2491
Comments
Bizarre. The stacktrace shows a parser exception.
I added some logging to the test and grepped everything out except just the 'nextOutgoingFrame' debug logging in FragmentExtension (see log for failing test here). It appears as if everything happens initially on the 'server response' thread (which is what I called the single thread that does the send), neatly passing on 10 fragmented frames and then a short full-message frame. But after a while some stuff happens on a 'qtp523691575-17' thread. Not sure if this is relevant(?)
From the Wireshark capture, it's the sending side that's messing up, given what the testcase seems to be doing.
Initial blush: there's no regard for backpressure on the write side of this use case. Initial guess (needs more testing): using sendString() from another thread isn't a great idea.
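For context, this is roughly what that pattern looks like with the Jetty 9.4 WebSocket API: a separate application thread pushing a mix of large (fragmented) and small messages through the blocking RemoteEndpoint.sendString(). This is only a sketch of the suspected usage; the class name, thread name, and message sizes are assumptions, not taken from the linked test project.

```java
// Sketch of "sendString() from another thread" with no backpressure handling.
// Illustrative only; not the code from the issue's reproducer.
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;

@WebSocket
public class PushingSocket
{
    @OnWebSocketConnect
    public void onConnect(Session session)
    {
        // "server response" thread: interleaves large (fragmented) and small messages
        new Thread(() ->
        {
            try
            {
                for (int i = 0; i < 10; i++)
                {
                    session.getRemote().sendString(largeMessage()); // fragmented by FragmentExtension
                    session.getRemote().sendString("small-" + i);   // fits in a single frame
                }
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }, "server response").start();
    }

    private static String largeMessage()
    {
        // payload larger than the extension's configured maxLength, so it gets fragmented
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100_000; i++)
            sb.append('x');
        return sb.toString();
    }
}
```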
You can follow along with the test project at https://github.com/joakime/websocket-issue-2491
OK. Thanks. Glad you're seeing it too.
The backpressure behaviors are not the cause. Need to dig into WebSocketRemoteEndpoint and FrameFlusher to see if the fault lies there.
WebSocketRemoteEndpoint and FrameFlusher are not at fault. Taking a closer look at the FragmentExtension now.
Note: we are planning on eliminating the FragmentExtension, as it's incompatible with the WebSocket over HTTP/2 drafts, which enforce fragmentation anyway.
+ Adding testcase for RemoteEndpoint
+ Adding testcase for FrameFlusher
Signed-off-by: Joakim Erdfelt <joakim.erdfelt@gmail.com>
I've been able to narrow down the fault to the FragmentExtension.
+ Adding testcase to ensure no regression
+ All data frames that arrive are now sent through the IteratingCallback to ensure that input frame order is preserved.
Signed-off-by: Joakim Erdfelt <joakim.erdfelt@gmail.com>
Fixed in commit c596fca on branch jetty-9.4.x-issue-2491-ws-fragmentextension. Here's what was happening:
1. The FragmentExtension would enqueue data frames (TEXT/BINARY/CONTINUATION) if they were over the configured maxLength.
2. The FragmentExtension had an internal flusher that processed large payloads from that queue out as fragments, with support for backpressure.
3. We had a large frame in the queue, being processed.
4. A smaller frame arrived that was under the configured maxLength; it skipped the queue and was immediately sent. (a bug that was just fixed)
5. The callbacks related to the original frame were being notified of success on every fragment being produced. (a bug that was just fixed)
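For illustration, here is a conceptual sketch (plain Java, not Jetty's actual FragmentExtension or IteratingCallback code) of the ordering rule the fix enforces: every outgoing data frame goes through the same queue and is flushed sequentially, so a small frame can never overtake the remaining fragments of a larger one. The class, field, and method names are invented for this sketch; the real implementation flushes asynchronously via an IteratingCallback and honors backpressure.

```java
// Conceptual sketch of "nothing skips the queue": all payloads are drained in
// arrival order, and large payloads are emitted as ordered fragments.
import java.util.ArrayDeque;
import java.util.Queue;

class OrderedFragmenter
{
    private final int maxLength;                      // assumed fragment size limit
    private final Queue<String> entries = new ArrayDeque<>();
    private boolean flushing;

    OrderedFragmenter(int maxLength)
    {
        this.maxLength = maxLength;
    }

    // Called for every outgoing message: small messages are queued too.
    void send(String payload)
    {
        synchronized (this)
        {
            entries.add(payload);
            if (flushing)
                return;       // an earlier call is already draining the queue
            flushing = true;
        }
        flush();
    }

    private void flush()
    {
        while (true)
        {
            String payload;
            synchronized (this)
            {
                payload = entries.poll();
                if (payload == null)
                {
                    flushing = false;
                    return;
                }
            }
            // Emit the payload as one or more fragments, in order.
            for (int offset = 0; offset < payload.length(); offset += maxLength)
            {
                int end = Math.min(offset + maxLength, payload.length());
                boolean fin = (end == payload.length());
                writeFrame(payload.substring(offset, end), fin);
            }
        }
    }

    private void writeFrame(String fragment, boolean fin)
    {
        // Placeholder for the actual network write; the real code would also only
        // notify the original message's callback once the final fragment succeeds.
        System.out.printf("frame fin=%s len=%d%n", fin, fragment.length());
    }
}
```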
Well done for getting to the bottom of that one!
Will there be a release soon, do you know, or should we build from sources?
Many thanks.
9.4.10 final release should occur sometime within the next 2 weeks.
jetty-9.4.x-issue-2491-ws-fragmentextension: Issue #2491 WebSocket FragmentExtension produces out of order Frames
@jeremystone please try Jetty 9.4.10.RC1 (currently on Maven Central)
9.4.10.RC1 looks good. I've not been able to reproduce with this. Thanks again.
No, thank you for the issue report with reproducer! (so important to get to the bottom of the issue)
When a number of large fragmented messages interspersed with smaller non-fragmented messages are sent over a websocket to a slow client (or over a slow network), the client sometimes sees one of the smaller messages before the final fragment of the previously sent larger message.
The stack trace on the client is:
Note that we first noticed this with a Jetty 9.4.9 server and a Netty-based client (and Netty detects the same thing), so the problem is likely with the fragment creation (in this case on the server end) rather than with the fragment re-assembly on the client.
I have created this gist, which is a JUnit test case showing that this seems to happen only with 'slow' readers. (Our original Netty/Jetty setup required a moderately slow internet link or something like NetLimiter to reliably reproduce it.)
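For reference, a rough sketch of the "slow reader" half of such a test (this is an assumed shape, not the contents of the gist): the client handler sleeps in its message callback so the sender's write side backs up while it interleaves large and small sendString() calls, as in the sketch earlier in this thread.

```java
// Sketch of a deliberately slow client endpoint (Jetty 9.4 WebSocket API).
// Names and the sleep duration are illustrative assumptions.
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;

@WebSocket
public class SlowClientSocket
{
    @OnWebSocketMessage
    public void onMessage(Session session, String message)
    {
        try
        {
            // Simulate a slow reader / slow network so the sender's write side backs up.
            Thread.sleep(50);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
        System.out.println("received " + message.length() + " chars");
    }
}
```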
Please let me know if you need any further information.