
fix(tls): flush send buffer in the background after closing TLS stream #10146

Merged (1 commit) on May 11, 2021

Conversation

@piscisaureus (Member) commented Apr 12, 2021:

In #9118, TLS streams were split into a "read half" and a "write half"
using tokio::io::split() to allow concurrent Conn#read() and
Conn#write() calls without one blocking the other. However, this
introduced a bug: outgoing data gets discarded when the TLS stream is
gracefully closed, because the read half is closed too early, before all
TLS control data has been received.

Fixes: #9692
Fixes: #10049
Fixes: #10296
Fixes: denoland/std#750
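To make the failure mode concrete, here is a minimal sketch in plain Rust (a hypothetical `SimStream` type, not Deno's actual stream code): a close that drops the stream while the send buffer still holds data loses those bytes, while a close that keeps flushing until the buffer drains, as this PR does in the background, delivers them.

```rust
use std::collections::VecDeque;

// Hypothetical simulation of a stream with a buffered "send half".
struct SimStream {
    send_buf: VecDeque<u8>, // bytes queued but not yet on the wire
    wire: Vec<u8>,          // bytes actually delivered to the peer
}

impl SimStream {
    fn new() -> Self {
        SimStream { send_buf: VecDeque::new(), wire: Vec::new() }
    }

    fn write(&mut self, data: &[u8]) {
        self.send_buf.extend(data);
    }

    // Buggy close: drop the stream while data is still buffered.
    fn close_discarding(self) -> Vec<u8> {
        self.wire // send_buf is silently lost
    }

    // Fixed close: flush the send buffer before dropping the stream.
    fn close_flushing(mut self) -> Vec<u8> {
        while let Some(b) = self.send_buf.pop_front() {
            self.wire.push(b);
        }
        self.wire
    }
}

fn main() {
    let mut a = SimStream::new();
    a.write(b"hello");
    assert!(a.close_discarding().is_empty()); // data lost: the bug

    let mut b = SimStream::new();
    b.write(b"hello");
    assert_eq!(b.close_flushing(), b"hello"); // data delivered: the fix
}
```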


runtime/ops/tls.rs — outdated review thread (resolved)

```diff
-impl Resource for TlsClientStreamResource {
+impl Resource for TlsStreamResource {
   fn name(&self) -> Cow<str> {
```

Contributor commented:

A bit of a tangent, but for my edification: why `Cow<str>` instead of `&'static str`? Literally all implementers call `into()` on a `&'static str`.

@piscisaureus (Member Author) replied:

Good point, actually. But I think changing that belongs in a separate commit.
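For illustration, a minimal sketch of the pattern under discussion (a hypothetical `NamedResource` trait, not the actual `Resource` trait): `Cow<str>` admits owned, dynamically built names, but when every implementer just converts a string literal, a `&'static str` return type would express the same thing more directly.

```rust
use std::borrow::Cow;

// Hypothetical trait mirroring the shape discussed above.
trait NamedResource {
    // Cow<str> admits both borrowed and owned names...
    fn name(&self) -> Cow<str>;
}

struct TlsStream;

impl NamedResource for TlsStream {
    fn name(&self) -> Cow<str> {
        // ...but every implementer just converts a &'static str,
        // which is the reviewer's point: the narrower return type
        // would suffice unless a dynamic name is ever needed.
        "tlsStream".into()
    }
}

fn main() {
    assert_eq!(TlsStream.name(), "tlsStream");
}
```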

runtime/ops/tls.rs — additional review threads (resolved, several outdated)
@ry (Member) left a comment:

Can tokio-rustls be removed from cli/Cargo.toml and test_util/Cargo.toml now?

@quininer commented:

> tokio-rustls streams misbehave when split with tokio::io::split() into a "read half" and a "write half".

Hi, can you describe the problem? I am very interested in it.

```rust
ready!(self.poll_io(cx, Flow::Write))?;

match self.tls.write(buf) {
  Ok(0) => unreachable!(),
```

@quininer commented:

It may return 0 when the internal buffer of rustls is full.

@piscisaureus (Member Author) commented Apr 16, 2021:

That shouldn't happen, because poll_io() returns Poll::Pending when the buffer is full. However, it will return 0 when buf is empty, so I removed the assertion anyway.
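The exchange above can be modeled with a small sketch (a hypothetical `BoundedBuf` type standing in for rustls's internal buffer, with `Option<usize>` standing in for `Poll`): a raw write into a full buffer returns 0, but a write guarded the way poll_io() guards never runs while the buffer is full, so the guarded path cannot observe a 0 return for a non-empty slice.

```rust
use std::collections::VecDeque;

// Hypothetical model, not rustls itself: a bounded buffer whose raw
// `write` returns 0 when full, plus a guarded write that, like
// poll_io(), refuses to proceed until there is room.
struct BoundedBuf {
    buf: VecDeque<u8>,
    cap: usize,
}

impl BoundedBuf {
    fn new(cap: usize) -> Self {
        BoundedBuf { buf: VecDeque::new(), cap }
    }

    // rustls-style write: accepts as much as fits; 0 means "buffer full"
    // (and also "empty input", hence the removed unreachable!()).
    fn write(&mut self, data: &[u8]) -> usize {
        let room = self.cap - self.buf.len();
        let n = room.min(data.len());
        self.buf.extend(&data[..n]);
        n
    }

    // poll_io-style guard: never calls write while the buffer is full.
    // None stands in for Poll::Pending.
    fn poll_write(&mut self, data: &[u8]) -> Option<usize> {
        if self.buf.len() == self.cap {
            None // wait for the buffer to drain first
        } else {
            Some(self.write(data))
        }
    }
}

fn main() {
    let mut b = BoundedBuf::new(4);
    assert_eq!(b.write(b"abcdef"), 4);     // partial write fills the buffer
    assert_eq!(b.write(b"gh"), 0);         // raw write can return 0...
    assert_eq!(b.poll_write(b"gh"), None); // ...the guard pends instead
    assert_eq!(b.write(&[]), 0);           // empty input also yields 0
}
```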

@quininer replied:

No, it will not return Pending; see https://github.com/ctz/rustls/blob/47d309233f2d21ccf058d35c58d684e5e4c9c643/rustls/src/server/mod.rs#L738

@piscisaureus (Member Author) commented Apr 16, 2021:

poll_io() returns Poll::Pending when the buffer is full:

```rust
true => Poll::Pending,
```

@quininer replied:

Sorry, I didn't look at poll_io; you are right.

```rust
unsafe { &mut *(buf.unfilled_mut() as *mut [MaybeUninit<u8>] as *mut _) };

match self.tls.read(buf_slice) {
  Ok(0) => unreachable!(),
```

@quininer commented:

I think it will return 0 at EOF.
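This behavior can be shown with std::io::Read directly: once a reader is exhausted, read() returns Ok(0) rather than an error, so treating Ok(0) as unreachable would panic on an ordinary end of stream.

```rust
use std::io::{Cursor, Read};

fn main() {
    let mut reader = Cursor::new(b"hi".to_vec());
    let mut buf = [0u8; 4];

    // First read drains the two available bytes.
    let n = reader.read(&mut buf).unwrap();
    assert_eq!(n, 2);

    // At EOF, read() returns Ok(0), so an unreachable!() on Ok(0)
    // would panic on a normal stream close.
    let n = reader.read(&mut buf).unwrap();
    assert_eq!(n, 0);
}
```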

```rust
) -> Poll<io::Result<()>> {
  ready!(self.poll_io(cx, Flow::Read))?;

  if matches!(self.rd_shut, Shut::TlsShut | Shut::TcpShut) {
```

@quininer commented:

I think this obscures the shutdown status of the TCP stream and the TLS stream: when EOF occurs on the TCP stream, there may still be readable data in the TLS stream's buffer.
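The concern can be modeled with a short sketch (a hypothetical `TlsReader` type, not the PR's actual code): even after the transport hits EOF, decrypted plaintext may still sit in the TLS layer's buffer, so the read path must drain that buffer before reporting end of stream to the caller.

```rust
use std::collections::VecDeque;

// Hypothetical model: the TCP side has hit EOF, but the TLS layer
// still holds plaintext that was decrypted earlier.
struct TlsReader {
    plaintext: VecDeque<u8>, // already-decrypted bytes
    tcp_eof: bool,           // transport-level EOF seen
}

impl TlsReader {
    // Drain buffered plaintext first; only report EOF (None) once the
    // buffer is empty AND the transport is closed.
    fn read(&mut self, buf: &mut [u8]) -> Option<usize> {
        if !self.plaintext.is_empty() {
            let n = buf.len().min(self.plaintext.len());
            for b in buf[..n].iter_mut() {
                *b = self.plaintext.pop_front().unwrap();
            }
            Some(n)
        } else if self.tcp_eof {
            None // genuine end of stream
        } else {
            Some(0) // would be Poll::Pending in async code
        }
    }
}

fn main() {
    let mut r = TlsReader {
        plaintext: VecDeque::from(vec![1u8, 2, 3]),
        tcp_eof: true, // TCP already closed...
    };
    let mut buf = [0u8; 8];
    // ...yet buffered plaintext must still reach the caller.
    assert_eq!(r.read(&mut buf), Some(3));
    assert_eq!(&buf[..3], &[1u8, 2, 3]);
    // Only now is EOF reported.
    assert_eq!(r.read(&mut buf), None);
}
```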

runtime/ops/tls.rs — outdated review thread (resolved)
runtime/ops/http.rs — outdated review thread (resolved)
```diff
@@ -161,7 +161,7 @@ where
 let tls_connector = TlsConnector::from(Arc::new(config));
 let dnsname =
-  DNSNameRef::try_from_ascii_str(&domain).expect("Invalid DNS lookup");
+  DNSNameRef::try_from_ascii_str(&domain).expect("invalid hostname");
```

Member commented:

Is this related to tokio-rustls?

```diff
@@ -32,7 +32,7 @@ unitTest(
 await assertThrowsAsync(async () => {
   await Deno.connectTls({ hostname: "127.0.0.1", port: 3567 });
-}, Error);
+}, TypeError);
```

Member commented:

Related?

```diff
@@ -8,5 +10,8 @@ const listener = Deno.listenTls({
 console.log("READY");

 for await (const conn of listener) {
+  if (await conn.read(new Uint8Array(1)) !== null) {
+    throw new Error("should not receive data on TLS connection");
+  }
```

Member commented:

Is this demonstrating the bug this PR is fixing?

cli/tests/integration_tests.rs — outdated review thread (resolved)
@piscisaureus (Member Author) replied:

> Can tokio-rustls be removed from cli/Cargo.toml and test_util/Cargo.toml now?

From CLI, yes.

But test_util and op_crates/deno_websocket still use tokio-rustls. It would be possible to switch them, but in order to do that we would have to move the TlsStream implementation to a separate crate, since neither test_util nor deno_websocket depend on deno_runtime, and it seems best to keep it that way.

Note that deno_websocket inherits the dependency from tokio-tungstenite, which uses a very similar mitigation for what appears to be the same problem.

@ry (Member) commented May 4, 2021:

@piscisaureus What's the status of this PR?

piscisaureus added a commit to piscisaureus/deno that referenced this pull request on May 10, 2021: fix(tls): flush send buffer in the background after closing TLS stream (denoland#10146)

@piscisaureus changed the title from "WIP: integrate rustls without using tokio-rustls" to "fix(tls): flush send buffer in the background after closing TLS stream" on May 10, 2021