Http2: Hyper client gets stuck if too many requests are spawned #2419

Closed
staticgc opened this issue Feb 5, 2021 · 17 comments
@staticgc

staticgc commented Feb 5, 2021

I have a Go HTTPS server to which a Rust client sends a lot of parallel requests.
After a few hundred requests, the client stops and apparently there is no TCP communication.

The REST API accepts a byte buffer in the body and responds with its length in a JSON body.

To reproduce, the entire code is at: https://github.com/staticgc/hyper-stuck
It has the Go server, the Rust client, and a Go client.
The build instructions are very straightforward.

The default max number of futures spawned is 400.
This is apparently more than the max number of HTTP/2 streams at the server (which is 250).

If the count is reduced to 200, it works.

Other observations:

  1. The Go client works even with a high goroutine count
  2. Increasing the number of connections in the Rust client from 1 to, say, 3 causes more requests to be processed, but it again gets stuck
  3. With 400 futures/requests and the number of connections set to 10 (chosen arbitrarily), it worked OK
  4. When HTEST_BUF_SIZE is reduced to, say, 1 KB, then even with 1 connection and a high future count it works
  5. When the Rust client gets stuck, it does so after a predictable number of requests

"Number of connections" here refers to negotiated HTTP/2 connections, of which hyper (I think) creates only one per Client instance.
So changing HTEST_CONN_COUNT changes the number of Client instances that are created.
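Since each Client instance carries a single HTTP/2 connection, observation 3 above amounts to spreading requests over several independent clients. A minimal round-robin picker over client instances might look like the following (a hypothetical, std-only sketch; `RoundRobin` is not part of the repo, and the client type is left generic):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Round-robin picker over several independent clients. Each call to
// pick() advances an atomic counter, so concurrent tasks spread their
// requests evenly across the underlying HTTP/2 connections.
pub struct RoundRobin<T> {
    items: Vec<T>,
    next: AtomicUsize,
}

impl<T> RoundRobin<T> {
    pub fn new(items: Vec<T>) -> Self {
        assert!(!items.is_empty(), "need at least one client");
        RoundRobin { items, next: AtomicUsize::new(0) }
    }

    // Returns the next client in rotation.
    pub fn pick(&self) -> &T {
        let i = self.next.fetch_add(1, Ordering::Relaxed);
        &self.items[i % self.items.len()]
    }
}
```

With `HTEST_CONN_COUNT` clients in the vector, each request would grab `picker.pick()` instead of a single shared client.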

Also, to prevent initially flooding the server, the Rust client makes a single HTTP/2 request on each Client instance before issuing the parallel requests.

```rust
for i in 0..conn_count {
    send_req_https(client_vec[i].clone(), bufsz, url.as_str()).await?;
}
```

Able to reproduce on macOS and CentOS Linux.

Edit: Apologies for some typos in the build instructions in the repo above; I have fixed them. Let me know here if anything remains.

@staticgc
Author

staticgc commented Feb 9, 2021

I think I might have found the issue.

When the number of active streams exceeds 250 (the default limit), the future polled here returns Poll::Pending. The parent future, i.e. ClientTask, then never gets woken up again when the stream count drops below the threshold.

So the poll here is not called again. In my program there is an upper limit on the number of futures (controlled by a semaphore), and it waits for existing streams to complete. All the wake-ups for the unbounded channel req_tx have already been consumed, so there is nothing left to wake the future.

@seanmonstar

@staticgc
Author

staticgc commented Feb 11, 2021

Here is a workaround:

Wrap hyper::client::Client and guard the request method with a semaphore.

If more parallel requests are needed, create multiple CustomClient instances.

Edit: This workaround is only for HTTP/2.

```rust
use std::sync::Arc;

use hyper::client::HttpConnector;
use hyper::{Body, Client, Request, Response};
use hyper_openssl::HttpsConnector;
use tokio::sync::Semaphore;

#[derive(Clone)]
struct CustomClient {
    client: Client<HttpsConnector<HttpConnector>>,
    guard: Arc<Semaphore>,
}

impl CustomClient {
    fn new() -> Result<Self, Error> {
        let client = CustomClient::new_hyper_client()?;
        // Hardcoded limit just below the server's stream threshold of 250.
        let guard = Arc::new(Semaphore::new(240));

        Ok(CustomClient { client, guard })
    }

    async fn request(&self, req: Request<Body>) -> Result<Response<Body>, Error> {
        // Are we allowed to make the request? If not, wait here.
        // The permit is released when `_permit` drops at the end of this call.
        let _permit = self.guard.acquire().await?;

        let rsp = self.client.request(req).await?;
        Ok(rsp)
    }
}
```

@cynecx

cynecx commented Mar 20, 2021

Ping. I somehow came across this issue while trying out some stuff between Go and tokio/hyper/h2. Since it seems to be an issue with h2, would it make sense to move the issue there?

@staticgc
Author

staticgc commented Mar 20, 2021

> Ping. Somehow I came across this issue trying out some stuff between go and tokio/hyper/h2. Since it seems like an issue with h2, it would make sense to move the issue there?

@seanmonstar can you please advise?

@silence-coding
Contributor

silence-coding commented Nov 1, 2021

How's this going? Is someone working on it?

@fasterthanlime
Contributor

Here's a nice self-contained reproduction for this bug: https://github.com/fasterthanlime/h2-repro

@jfourie1
Contributor

FWIW, I observed that an existing waker is silently dropped here. In other words, the following assert would fail if added to the top of the function: assert!(self.send_task.is_none())

@jfourie1
Contributor

As a quick test/hack I added the following to the start of the function mentioned above:

```rust
if let Some(task) = self.send_task.take() {
    task.wake();
}
```

This seems to fix the issue for me, but I'm still trying to figure out what is going on. I wouldn't recommend just blindly applying this patch.

@jfourie1
Contributor

Some further analysis:

Looking at the first iteration of this loop, the following seems to happen for the request that gets stuck:
The stream is not yet marked as pending, so this returns Ok(()).
The stream is then marked pending and added to the pending queue via the call to send_request() here.
This poll() here calls this, storing a waker in self.send_task. On the next iteration of the loop the stream is marked as pending, so this returns Poll::Pending and immediately overwrites the above waker.

@seanmonstar
Member

Thank you so much for debugging this. Once you feel comfortable the patch is no longer blind, I'd be happy to merge a PR. (Mega bonus points if it becomes clear how to trigger this condition in a unit test, but not required.)

jeromegn added a commit to jeromegn/h2 that referenced this issue Oct 23, 2022
@jfourie1
Contributor

I spent some more time on this today and would definitely not recommend applying the previous "patch". With that patch applied, the request that would have gotten stuck is effectively busy-polled until it completes, which is not ideal.

The real problem/bug is that this function, specifically the call to self.inner.poll_pending_open(), does not allow more than one waiter. In addition, this single waiter is shared with this and this. So the current model of calling poll_ready() for multiple SendRequest clones before calling send_request() will cause this problem. I'll try to add some unit tests to illustrate it.

It seems that what is needed is some mechanism that allows multiple concurrent tasks to wait for send-stream capacity. One thought I had was to use a semaphore and change poll_ready() to poll that semaphore. The semaphore would be released once a request is done, and its number of permits would be set to max_send_streams. Does this make sense? Any other ideas?
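The multi-waiter direction suggested above can be sketched as follows (a hypothetical, std-only model that ignores fairness and cancellation; none of these names are from h2): instead of a single Option<Waker>, keep a permit count plus a queue of wakers, so releasing capacity wakes a parked task rather than losing it:

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// Capacity gate with one permit per allowed concurrent stream.
// Unlike a single waker slot, every task that finds no permit
// queues its waker, so no wake-up is lost.
pub struct StreamGate {
    state: Mutex<GateState>,
}

struct GateState {
    permits: usize,
    waiters: VecDeque<Waker>,
}

impl StreamGate {
    pub fn new(max_streams: usize) -> Self {
        StreamGate {
            state: Mutex::new(GateState { permits: max_streams, waiters: VecDeque::new() }),
        }
    }

    // poll_ready: take a permit, or park this task's waker.
    pub fn poll_ready(&self, cx: &mut Context<'_>) -> Poll<()> {
        let mut s = self.state.lock().unwrap();
        if s.permits > 0 {
            s.permits -= 1;
            Poll::Ready(())
        } else {
            s.waiters.push_back(cx.waker().clone());
            Poll::Pending
        }
    }

    // Called when a request completes: return the permit and wake
    // one parked task, which then re-polls and takes the permit.
    pub fn release(&self) {
        let mut s = self.state.lock().unwrap();
        s.permits += 1;
        if let Some(w) = s.waiters.pop_front() {
            w.wake();
        }
    }
}

// Demo helper: a waker that records whether it ever fired.
pub struct Flag(pub AtomicBool);

impl Wake for Flag {
    fn wake(self: Arc<Self>) {
        self.0.store(true, Ordering::SeqCst);
    }
}

pub fn flag_waker() -> (Waker, Arc<Flag>) {
    let f = Arc::new(Flag(AtomicBool::new(false)));
    (Waker::from(f.clone()), f)
}
```

With max_streams permits equal to the peer's stream limit, every parked SendRequest clone eventually gets its own wake-up, which is exactly what the single send_task slot cannot provide.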

jfourie1 added a commit to jfourie1/hyper that referenced this issue Nov 2, 2022
There exists a race condition in ClientTask::poll() when the request
that is sent via h2::client::send_request() is pending open. A task will
be spawned to wait for send capacity on the send stream. Because this
same stream is also stored in the pending member of
h2::client::SendRequest, the next iteration of the poll() loop can call
poll_ready() and call wait_send() on the same stream passed into the
spawned task.

Fix this by always calling poll_ready() after send_request(). If this
call to poll_ready() returns Pending, save the necessary context in
ClientTask and only spawn the task that will eventually resolve to the
response after poll_ready() returns Ok.
@jfourie1
Contributor

jfourie1 commented Nov 2, 2022

I created a pull request for this issue, please see here. The actual issue turned out to be slightly different from what I originally thought. Please see the commit log in the PR for further detail: the actual issue is a race condition in ClientTask::poll(), not a problem in the h2 crate.

seanmonstar pushed a commit that referenced this issue Nov 7, 2022
Closes #2419
@ibotty

ibotty commented Jan 3, 2023

I don't quite understand. AFAICT #3041 is merged. Doesn't that fix the issue?

@jeromegn

jeromegn commented Jan 4, 2023

I believe it can be closed.

@kundu-subhajit

Hi @jfourie1, @seanmonstar, we want to reopen this issue. We tried the same code with a semaphore value of 400, as in the repository, and we are able to hit the issue even with https://github.com/hyperium/hyper/releases/tag/v0.14.27. Note that we have the server and client running on different machines.

We do acknowledge that the issue isn't reproducible with a semaphore count below 250. However, in our local setup the server's max number of HTTP/2 streams is 100, and even with a semaphore count of 90, network communication through hyper gets stuck.

Possibly these issues are connected and have the same root cause. We need your help to understand this further.

@kundu-subhajit

@jeromegn, @seanmonstar, @jfourie1: Please advise us on how to reopen the issue.

@seanmonstar
Member

I'd recommend opening a new issue, with whatever details you can provide. Let's leave this one alone.
