Modulate response output writes to 16KiB blocks #1790
Conversation
Block size of 16KiB is small enough to allow cipher implementations to become hot and optimize properly when given large inputs. Otherwise, large array writes into a javax.crypto.CipherOutputStream fail to use intrinsified implementations. If 16KiB blocks aren't enough to produce hot methods, the I/O is small and infrequent enough that performance isn't relevant. For more information, see the details around com.sun.crypto.provider.GHASH::processBlocks in palantir/hadoop-crypto#586 (comment)
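The idea described above can be sketched as a small FilterOutputStream that re-slices large array writes into fixed-size blocks. This is a minimal sketch based on the PR description, not the exact Dialogue implementation; the class name and details are assumptions.

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/**
 * Sketch (hypothetical, not the Dialogue source): breaks large writes into
 * 16KiB blocks so cipher methods underneath are invoked often enough to be
 * JIT-compiled and intrinsified.
 */
final class ModulatingOutputStreamSketch extends FilterOutputStream {
    private static final int BLOCK_SIZE = 16 * 1024; // 16KiB

    ModulatingOutputStreamSketch(OutputStream delegate) {
        super(delegate);
    }

    @Override
    public void write(byte[] buffer, int off, int len) throws IOException {
        int offset = off;
        int remaining = len;
        while (remaining > 0) {
            // Write at most one block per call to the delegate.
            int toWrite = Math.min(remaining, BLOCK_SIZE);
            out.write(buffer, offset, toWrite);
            offset += toWrite;
            remaining -= toWrite;
        }
    }
    // write(int) is inherited from FilterOutputStream, which already
    // delegates a single byte directly to out.write(b).
}
```

Wrapping a javax.crypto.CipherOutputStream in such a stream means a single 10MiB write becomes ~640 smaller writes, each of which exercises the cipher's hot path.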
 * in order to prevent degraded performance on large buffers as described in
 * <a href="https://github.com/palantir/hadoop-crypto/pull/586">hadoop-crypto#586</a>.
 */
static final class ModulatingOutputStream extends FilterOutputStream {
Adjusted naming from ChunkingOutputStream to ModulatingOutputStream to avoid confusion with multipart chunks, since this is just wrapping & writing to a single OutputStream in 16KB blocks.
            remaining -= toWrite;
        }
    }
}
Might be helpful to override write(int) as a simple passthrough to out and avoid array allocation.
Done, though I don't think it is actually necessary, as FilterOutputStream#write(int) just does out.write(b);
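For reference, the delegation behavior described in this comment can be checked directly: a bare FilterOutputStream passes single-byte writes straight through to the wrapped stream without any array allocation. This check is illustrative, not part of the PR.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;

// Demonstrates (outside the PR) that FilterOutputStream#write(int) already
// delegates directly to out.write(b), so no passthrough override is needed.
public final class WriteIntPassthroughCheck {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        FilterOutputStream wrapper = new FilterOutputStream(sink);

        wrapper.write(0x7f); // FilterOutputStream#write(int) calls out.write(b)
        wrapper.flush();

        assert sink.size() == 1;
        assert sink.toByteArray()[0] == 0x7f;
    }
}
```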
Ah right, sorry for the runaround!
Thanks!
Released 3.67.0
Allow for optimization when the underlying input stream (such as ByteArrayInputStream or ChannelInputStream) overrides transferTo(OutputStream) to avoid extra array allocations and copy larger chunks at a time (e.g. allowing 16KiB chunks via ApacheHttpClientBlockingChannel.ModulatingOutputStream from #1790). When moving to JDK 21+, this will also enable 16KiB byte chunk copies via InputStream.transferTo(OutputStream) per JDK-8299336, whereas on JDK < 21, and when using Guava ByteStreams.copy, 8KiB byte chunk copies are used.

References:
* palantir/hadoop-crypto#586
* https://bugs.openjdk.org/browse/JDK-8299336
* https://bugs.openjdk.org/browse/JDK-8067661
* https://bugs.openjdk.org/browse/JDK-8265891
* https://bugs.openjdk.org/browse/JDK-8273038
* https://bugs.openjdk.org/browse/JDK-8279283
* https://bugs.openjdk.org/browse/JDK-8296431
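The transferTo optimization described above can be illustrated with a small sketch. On JDKs where ByteArrayInputStream overrides transferTo (see JDK-8265891 in the references), the stream hands its whole backing array to the destination in a single write call, with no intermediate copy buffer; a wrapping block-limiting stream such as the PR's ModulatingOutputStream can then re-slice that one write into 16KiB blocks. This demo is an assumption-labeled sketch, not Dialogue code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public final class TransferToDemo {
    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[64 * 1024];
        InputStream source = new ByteArrayInputStream(payload);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();

        // Where ByteArrayInputStream overrides transferTo, the copy below
        // avoids the 8KiB heap buffer a generic read/write loop (e.g. Guava
        // ByteStreams.copy) would allocate.
        long copied = source.transferTo(sink);

        assert copied == payload.length;
        assert sink.size() == payload.length;
    }
}
```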
…eam (#1983) BinaryRequestBody and ContentBody use InputStream.transferToOutputStream
Before this PR
When Dialogue would execute RequestBody#writeTo, a RequestBody implementation may attempt to write a large buffer to the resulting output stream, causing suboptimal cipher throughput due to the large size. See palantir/hadoop-crypto#586 & palantir/hadoop-crypto#587.
After this PR
==COMMIT_MSG==
Modulate response output writes to 16KiB blocks
Block size of 16KiB is small enough to allow cipher implementations to become hot and optimize properly when given large inputs. Otherwise, large array writes into a javax.crypto.CipherOutputStream fail to use intrinsified implementations. If 16KiB blocks aren't enough to produce hot methods, the I/O is small and infrequent enough that performance isn't relevant. For more information, see the details around com.sun.crypto.provider.GHASH::processBlocks in
palantir/hadoop-crypto#586 (comment)
==COMMIT_MSG==
Possible downsides?