Initial HTTP/2 support for Client and Server #1432

Merged: 1 commit merged Apr 13, 2018

Conversation

@seanmonstar (Member) commented Feb 2, 2018

This starts to add HTTP/2 support to hyper in 0.12.

So far, this only adds support for prior knowledge HTTP/2. ALPN and HTTP/1 upgrades would be later additions.

Client

The Client pool has been modified to try to use only a single connection when connecting over HTTP/2. However, it's rather hacky, and in general, the pool module could be tossed out and rewritten at some later date.

In this PR, the Client determines whether to use HTTP/1 or HTTP/2 based on the Builder::http2_only() setting.

Server

The server cannot yet detect whether incoming requests are HTTP/1 or HTTP/2, and instead requires the version to be set via Http::http2_only().


If you're showing up wanting to try this out, you need this in your Cargo.toml:

[dependencies.hyper]
git = "https://github.com/hyperium/hyper"
branch = "h2"

And then, if using the client:

let client = Client::builder()
    .http2_only(true)
    .build_http();
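
For completeness, here is a minimal sketch of actually driving a request with that HTTP/2-only client. It assumes the 0.12-style Client::get API, a hyper::Body body type, and the tokio 0.1 tokio::run executor; the exact surface on this branch may still differ, and the localhost URI is just a placeholder:

extern crate futures;
extern crate hyper;
extern crate tokio;

use futures::Future;
use hyper::{Body, Client, Uri};

fn main() {
    // HTTP/2-only client, as in the snippet above; the explicit Body
    // annotation is an assumption about how the builder's body type resolves.
    let client: Client<_, Body> = Client::builder()
        .http2_only(true)
        .build_http();

    // Placeholder URI for a server speaking prior-knowledge HTTP/2.
    let uri: Uri = "http://localhost:3000/".parse().unwrap();

    // `get` returns a future; drive it on the tokio runtime.
    let fut = client
        .get(uri)
        .map(|res| println!("response status: {}", res.status()))
        .map_err(|err| eprintln!("request error: {}", err));

    tokio::run(fut);
}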

If using the server:

let mut http = Http::new();
http.http2_only(true);
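
And, likewise, a rough sketch of wiring that Http instance up to a TCP listener. This assumes the 0.12-style serve_connection and service_fn_ok APIs plus the tokio 0.1 runtime, which may not exactly match this branch yet; the address and handler are placeholders:

extern crate futures;
extern crate hyper;
extern crate tokio;

use futures::{Future, Stream};
use hyper::{Body, Response};
use hyper::server::conn::Http;
use hyper::service::service_fn_ok;
use tokio::net::TcpListener;

fn main() {
    let addr = "127.0.0.1:3000".parse().unwrap();
    let listener = TcpListener::bind(&addr).expect("bind failed");

    // Prior-knowledge HTTP/2 only, as configured above.
    let mut http = Http::new();
    http.http2_only(true);

    let server = listener
        .incoming()
        .map_err(|e| eprintln!("accept error: {}", e))
        .for_each(move |socket| {
            // Serve each accepted TCP connection as an HTTP/2 connection.
            let conn = http
                .serve_connection(socket, service_fn_ok(|_req| {
                    Response::new(Body::from("hello over h2"))
                }))
                .map_err(|e| eprintln!("connection error: {}", e));
            tokio::spawn(conn);
            Ok(())
        });

    tokio::run(server);
}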

cc #304

@seanmonstar (Member, Author)

I think this is rather usable. There are some "integration" tests that use the full stack, client and server, and some basic cases are passing.

I still don't feel the most confident about the client pool changes, but this could probably be merged this week.

_ => {

}
}
@gootorov commented Feb 15, 2018

@seanmonstar, just randomly stumbled upon this repo. I think there's a way to simplify this, if I'm not missing anything:

match (self.version, *version) {
    (Ver::Http1, _) if *version != HttpVersion::Http10 && *version != HttpVersion::Http11 => {
        *version = HttpVersion::Http11;
    },
    _ => {},
}

}
}

// body adpaters used by both Client and Server

adpaters -> adapters

fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
loop {
// TODO: make use of flow control on SendStream
// If you're looking at this and thinking of tryig to fix this TODO,

tryig -> trying

@seanmonstar added this to the 0.12 milestone Feb 21, 2018
@dbrgn (Contributor) commented Mar 5, 2018

Is this supposed to work together with the latest version of hyper_tls? When I simply replace the latest released version of hyper with this git branch, I'm getting this error message:

error[E0599]: no method named `build` found for type `hyper::client::Config<hyper_tls::HttpsConnector<hyper::client::connect::HttpConnector>, hyper::Body>` in the current scope
   --> src/push/gcm.rs:109:10
    |
109 |         .build(&handle);
    |          ^^^^^
    |
    = note: the method `build` exists but the following trait bounds were not satisfied:
            `hyper_tls::HttpsConnector<hyper::client::connect::HttpConnector> : hyper::client::Connect`

This is the (HTTP 1 only) code:

    // Create async HTTP client instance    
    let https_connector = match HttpsConnector::new(4, &handle) {    
        Ok(conn) => conn,    
        Err(e) => return boxed!(future::err(    
            PushError::Other(format!("Could not create HttpsConnector: {}", e))    
        ))    
    };    
    let client = Client::configure()    
        .connector(https_connector)                                                 
        .build(&handle);                                                            

Or is this simply a version mismatch that could be resolved with a [patch] section in Cargo.toml?

@seanmonstar (Member, Author)

@dbrgn that's because hyper-tls is depending on the crates.io version. You'll need to put [patch] in.

@pimeys commented Mar 5, 2018

Been running this in production for a couple of weeks with pretty big loads. Fast and works nicely, but some bugs might be worth acknowledging if anybody else is testing... There is something fishy about how hyper manages dead connections. If the remote end drops the connection, hyper gives a connection error and after that just silently sucks every request into limbo. In other cases the connection just doesn't answer at all, but at least we can trigger a timeout and restart the connection.

P.S. If anybody wants to help debug h2/hyper with Apple pushes, all help is appreciated! https://github.com/pimeys/apns2

@dbrgn (Contributor) commented Mar 6, 2018

@dbrgn that's because hyper-tls is depending on the crates.io version. You'll need to put [patch] in.

Thanks! I had to do this:

# Cargo.toml
[patch.crates-io]
hyper = { version = "0.11.18", git = "https://github.com/hyperium/hyper", branch = "h2", features = ["http2"] }

And then:

cargo update -p "hyper:0.11.21" --precise 0.11.18

@dbrgn (Contributor) commented Apr 3, 2018

Edit: Disregard the comment below, it doesn't have anything to do with Hyper. Sorry!


I believe this is a problem with hyper and not with apns2: reown-com/a2#6

The application in question is an HTTP/1 server with async Hyper that uses an HTTP/2 client to connect to APNs.

Simplified code:

impl Service for PushHandler {
    // Boilerplate for hooking up hyper's server types
    type Request = Request;
    type Response = Response;
    type Error = HyperError;

    // The future representing the eventual response
    type Future = BoxedFuture<Self::Response, Self::Error>;

    fn call(&self, req: Request) -> Self::Future {
        // ...
        let apns_future = apns::send_push(...);
        boxed!(apns_future)
    }
}

For some reason the h2 connection to the server is immediately closed. In the log I can find this line:

TRACE 2018-04-03T10:12:27Z: hyper::proto::h2::client: client tx dropped

This is the full log:

DEBUG 2018-04-03T10:12:27Z: rustls::client::hs: Server DNS name is DNSName("api.development.push.apple.com")
TRACE 2018-04-03T10:12:27Z: rustls::client::hs: Not sending CertificateVerify, no key
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: flushed State { reading: KeepAlive, writing: Init, keep_alive: Busy, error: None, read_task: Some(Task) }
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::dispatch: Dispatcher::poll
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: Conn::read_keep_alive
DEBUG 2018-04-03T10:12:27Z: rustls::client::hs: Session not saved
TRACE 2018-04-03T10:12:27Z: hyper::client::pool: Pool::put ("https://api.development.push.apple.com", Http2)
TRACE 2018-04-03T10:12:27Z: hyper::client::pool: Pool::put removing canceled parked ("https://api.development.push.apple.com", Http2)
DEBUG 2018-04-03T10:12:27Z: hyper::client::pool: pooling idle connection for ("https://api.development.push.apple.com", Http2)
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: flushed State { reading: KeepAlive, writing: Init, keep_alive: Busy, error: None, read_task: Some(Task) }
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::dispatch: Dispatcher::poll
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: Conn::read_keep_alive
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: flushed State { reading: KeepAlive, writing: Init, keep_alive: Busy, error: None, read_task: Some(Task) }
TRACE 2018-04-03T10:12:27Z: hyper::proto::h2::client: client tx dropped
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::dispatch: Dispatcher::poll
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: Conn::read_keep_alive
TRACE 2018-04-03T10:12:27Z: apns2::client: Request error: an operation was canceled internally before starting
 WARN 2018-04-03T10:12:27Z: push_relay::server: Error: Push message could not be processed: Push was unsuccessful: Error connecting to APNs
TRACE 2018-04-03T10:12:27Z: hyper::proto: should_keep_alive(version=Http11, header=None) = true
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::role: ServerTransaction::encode has_body=true, method=Some(Post)
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::encode: sized write, len = 19
DEBUG 2018-04-03T10:12:27Z: hyper::proto::h1::io: flushed 155 bytes
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: maybe_notify; read_from_io blocked
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: flushed State { reading: Init, writing: Init, keep_alive: Idle, error: None, read_task: Some(Task) }
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::dispatch: Dispatcher::poll
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: Conn::read_head
DEBUG 2018-04-03T10:12:27Z: hyper::proto::h1::io: read 0 bytes
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::io: parse eof
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: State::close_read()
DEBUG 2018-04-03T10:12:27Z: hyper::proto::h1::conn: read eof
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: Conn::read_keep_alive
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: parking current task
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: maybe_notify; notifying task
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: flushed State { reading: Closed, writing: Init, keep_alive: Disabled, error: None, read_task: Some(Task) }
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::conn: shut down IO
TRACE 2018-04-03T10:12:27Z: hyper::proto::h1::dispatch: Dispatch::poll done

Could this be a bad interaction between h1 server and h2 client?

@seanmonstar (Member, Author)

I've just updated this branch with a rebase onto master (and minor fixes otherwise). If you were depending on the previous code somehow, I've pushed that to the 0.11.x-h2 branch, though I wouldn't expect that branch to live long.
