Netty Connector's JerseyClientHandler should defer notification for arriving content #4285
Comments
The changes to […]. The only concurrent cases where it even makes sense to have this ping-pong is when the Client attempts to consume the […]. I am not sure the changes to […]. Another important difference: retaining Netty's […].
netty_defer_notify.zip
The key idea of the refactoring is to separate concerns and make it plain who is responsible for what. Like, […]. Note Netty's […]. Testware update: a couple of tests are not well-behaved apps :) They receive HTTP 200, but do not dispose of […].
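For readers unfamiliar with the disposal point above: a JAX-RS client is expected to release the resources behind a `Response` it has received. The elided object in the comment is not recoverable from this page, so the sketch below only shows the conventional pattern, assuming a plain JAX-RS 2.x `Response`; the class name and target URL are illustrative.

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class WellBehavedClientExample {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client.target("http://localhost:8080/resource") // illustrative URL
                    .request()
                    .get();
            try {
                if (response.getStatus() == 200) {
                    // readEntity(...) consumes and closes the underlying entity stream
                    String body = response.readEntity(String.class);
                    System.out.println(body);
                }
            } finally {
                // Explicitly release any connector-level resources still held by the response
                response.close();
            }
        } finally {
            client.close();
        }
    }
}
```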
This commit contains a rewritten implementation of the Netty Jersey server and client with the following major changes:
1) Consume `ByteBuf`s from `HttpContent` as-is for processing, which reduces a lot of overhead.
2) Fix a bug in reading 0-byte or 1-byte JSON content.
3) Add three new configurable `ChunkedInput` buffer properties: `jersey.ci.capacity`, `jersey.ci.read.timeout` and `jersey.ci.write.timeout`.
4) Add a configurable payload size through `jersey.max.http.request.entitySizeMb`, defaulting to 50 Mb.
This change should fix some of the long-pending bug reports in eclipse-ee4j#3500, eclipse-ee4j#3568 and eclipse-ee4j#4286. It may also resolve eclipse-ee4j#4285.
Signed-off-by: Venkat Ganesh 010gvr@gmail.com
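The first point, consuming `ByteBuf`s from `HttpContent` as-is, is a general Netty technique rather than anything specific to this commit. The sketch below is illustrative only and is not the code that landed in Jersey; it merely shows a chunk being exposed as a stream over the original buffer instead of being copied into an intermediate `byte[]`. The handler name is made up.

```java
import io.netty.buffer.ByteBufInputStream;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpContent;

import java.io.IOException;
import java.io.InputStream;

/**
 * Illustrative only: expose each arriving chunk as an InputStream over the original
 * ByteBuf instead of copying it into an intermediate byte array.
 */
public class ByteBufPassThroughHandler extends SimpleChannelInboundHandler<HttpContent> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpContent chunk) throws IOException {
        // SimpleChannelInboundHandler releases the message after this method returns,
        // so retain the buffer and let the stream release it when it is closed.
        try (InputStream in = new ByteBufInputStream(chunk.content().retain(), true)) {
            // A real connector would hand 'in' to the entity-processing pipeline here;
            // for the sketch we just report how many readable bytes arrived in this chunk.
            System.out.println("chunk with " + in.available() + " readable bytes");
        }
    }
}
```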
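For illustration, the sketch below shows how such keys are typically passed to a Jersey client that uses the Netty connector. The property names are the ones quoted in the commit message; the value types, units and the endpoint URL are assumptions, not something documented here.

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.netty.connector.NettyConnectorProvider;

public class NettyConnectorPropertiesExample {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig()
                .connectorProvider(new NettyConnectorProvider())
                // Property keys as quoted in the commit message; value types and units are assumed.
                .property("jersey.ci.capacity", 8)                      // ChunkedInput buffer capacity
                .property("jersey.ci.read.timeout", 10_000)             // presumably milliseconds
                .property("jersey.ci.write.timeout", 10_000)            // presumably milliseconds
                .property("jersey.max.http.request.entitySizeMb", 50);  // commit states a 50 Mb default

        Client client = ClientBuilder.newClient(config);
        try {
            Response response = client.target("http://localhost:8080/ping") // illustrative endpoint
                    .request()
                    .get();
            System.out.println("HTTP " + response.getStatus());
            response.close();
        } finally {
            client.close();
        }
    }
}
```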
@senivam I am not sure why this is closed. This issue is more than just the use of buffers. Please justify.
This solution (netty_defer_notify.zip) was integrated into Jersey within #4286. Afterwards there were some attempts to redo the thread-pulling using the native Netty API, but the integrated solution appears to be more efficient. So I would close this as resolved.
@senivam makes sense. |
thank you, closing |
`JerseyClientHandler` overrides only `channelRead0`. Each socket read may produce more than one `channelRead0` invocation. This produces unnecessary wake-ups for the thread processing the response: the notification at

jersey/connectors/netty-connector/src/main/java/org/glassfish/jersey/netty/connector/JerseyClientHandler.java
Line 118 in 8dcbaf4

fires as soon as the `Response` headers become available, and then this one:

jersey/connectors/netty-connector/src/main/java/org/glassfish/jersey/netty/connector/JerseyClientHandler.java
Line 136 in 8dcbaf4

fires as soon as a `Response` body chunk becomes available (or a similar invocation for `LastHttpContent` a few lines further down). The timings observed in the tests suggest that these two manage to wake up the thread twice even for trivial responses, wasting CPU on wake-up calls and context switches.

`JerseyClientHandler` should override `channelReadComplete` and defer the wake-up until either `LastHttpContent` becomes available or `channelReadComplete` is invoked, whichever comes first. A proof of concept is available to demonstrate the reduction in CPU utilisation.
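To make the proposal concrete, here is a minimal, self-contained sketch of the deferred-notification pattern the issue asks for: remember that content has arrived while `channelRead0` is being called, and wake the consuming thread only on `LastHttpContent` or `channelReadComplete`, whichever comes first. The class and the `Runnable` hook are illustrative; this is not the actual `JerseyClientHandler` code, and content handling is omitted so that only the notification bookkeeping is shown.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.LastHttpContent;

/**
 * Illustrative handler: accumulates read events and notifies the response-processing
 * thread at most once per read batch instead of once per channelRead0 invocation.
 */
public class DeferredNotifyHandler extends SimpleChannelInboundHandler<HttpObject> {

    private final Runnable wakeUpConsumer;  // e.g. completes a future or signals a condition
    private boolean notificationPending;    // set when content arrived but has not been signalled yet

    public DeferredNotifyHandler(Runnable wakeUpConsumer) {
        this.wakeUpConsumer = wakeUpConsumer;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) {
        if (msg instanceof HttpResponse) {
            // Headers arrived: remember that the consumer needs a wake-up, but do not signal yet.
            notificationPending = true;
        }
        if (msg instanceof HttpContent) {
            notificationPending = true;
            if (msg instanceof LastHttpContent) {
                // The response is complete: signal immediately rather than waiting for readComplete.
                flushNotification();
            }
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        // One socket read may have produced several channelRead0 calls; signal once for the batch.
        flushNotification();
        ctx.fireChannelReadComplete();
    }

    private void flushNotification() {
        if (notificationPending) {
            notificationPending = false;
            wakeUpConsumer.run();
        }
    }
}
```

Because both `channelRead0` and `channelReadComplete` run on the same event-loop thread, a plain boolean flag is enough for the deferral itself; no additional synchronisation is needed inside the handler.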