
http: per-stream idle timeout. #3841

Merged 7 commits into envoyproxy:master on Jul 16, 2018

Conversation

@htuch (Member) commented Jul 11, 2018:

Fixes #1778.

Risk level: Medium. A very conservative 5 minute default idle timeout has been set, which should not affect most deployments, where the default connection-idle or upstream-idle timeouts already kick in first. It will, however, affect things like hanging POSTs.
Testing: Integration and unit tests added for various timeout scenarios.

Signed-off-by: Harvey Tuch <htuch@google.com>
@alyssawilk (Contributor) left a comment:

YAY! I'm so excited to have idle timeouts already. Thanks for tackling this!

@@ -437,6 +439,22 @@ message RouteAction {
google.protobuf.Duration per_try_timeout = 3 [(gogoproto.stdduration) = true];
}

// Specifies the idle timeout for the route. If not specified, there is no
// stream level timeout. This is distinct to :ref:`timeout
Contributor:

Do we think this is the right default? Given the lack of request timeouts I'd say having an overall timeout would be good, even if it's really high like the default response timeout.

Member Author:

I'm OK with setting something here like 30s. I think we should make it > default response timeout (15s), to reduce the chance of flakes in timeout due to idle vs. due to total response timeout. Does this work?

Member:

Yeah I think a default here is fine as long as it's greater than the default of 15s. However, I don't think that's the whole story. For people that have increased the timeout above 15s (or disabled it), this could cause problems. So, I think we either have to not have a default here, or actually have the logic to make the default if not set at least as long as the configured request timeout. Thoughts?

@htuch (Member Author), Jul 11, 2018:

There are also the per-connection idle timeout and per-retry timeouts to consider. I think if we want this to be unsurprising to existing users we will have to make it a max function of all the relevant timeouts. Might be easier to leave it to default off.

Member:

TBH that's my feeling also. I would leave it without default.

Contributor:

I'm less worried about the trickle-attack DoS vector (Envoy has plenty of those as it is) and more about folks losing their connection without the FIN being transmitted. This is a real thing that happens frequently on internet-facing traffic and will cause connection (so fd and memory) leakage without the buffer filter present. It's true that having a default request timeout would address my concern as well; I was just hoping to fix the resource leakage sooner rather than later.
If you really think this should be off by default and the default timeout should live in the should-be-moved request timeout, I'll cede the issue.

Member:

I agree this is a big problem, and I'm kicking myself for ending up where we are, but what's done is done. The issue mainly is that Lyft solves this problem w/ the buffer filter at the edge as we discussed offline. Totally agree we need a built-in timeout, but I'm also concerned that setting a default here is going to break people. I think we should have some big warning in the docs somewhere about this (where?) and I guess we could do a timeout here but IMO the default needs to be really high if we set one. E.g., like 5 minutes or something. Or we need to do some complex logic to infer the timeout somehow from the other timeouts people set.

Contributor:

I think a default idle-timeout of 5 minutes (or longer) should be fine. I also agree that having a POST overall timeout is useful, but I think that one is impossible to have a reasonable default value for.

Member Author:

OK, 5 minutes default, going once, twice, ...

Contributor:

Works for me! As said, I'm also fine landing this and then changing the default in a separate PR (even after a long window) if you want to keep the risky change minimal.

// <envoy_api_field_route.RouteAction.idle_timeout>` instead bounds the amount
// of time the request's stream may be idle.
//
// After header decoding, and assuming that this is not a WebSocket upgrade,
Contributor:

Wait, why are we not doing this for websocket upgrade? I totally want forward-progress timeouts.
Did you mean assuming it's not an old-style tcp proxy upgrade?

Member Author:

Yes, this is only for the TCP proxy upgrade scenario, will clarify.

// time an encode/decode event for headers or data is processed for the
// stream, the timer will be reset. If the timeout fires, the stream is
// terminated with a 408 Request Timeout error code if no upstream response
// header has been received, otherwise a stream reset occurs.
Contributor:

At some point when we have priorities properly implemented we may need to take into account stream starvation here (if HighPriStream3 is consuming all outgoing bandwidth but LowPriStream5 is otherwise ready to write it shouldn't time out). Thankfully not something we have to worry about now.

Brainstorming other failure modes, if we have connections with very many streams, we might hit problems where the progress is super bunchy and causes artificial timeouts. I think basically when there's space in the write buffer, every stream is informed and can do a full read and write, and then won't be called upon again until all those writes drain. I don't know if there are enough users with backed up connections and many streams that it's worth calling out, but it's food for thought.

Member Author:

I think calling this out would probably confuse more than help, even though I agree that this can happen in extreme queueing scenarios.

@@ -396,6 +396,25 @@ ConnectionManagerImpl::ActiveStream::~ActiveStream() {
ASSERT(state_.filter_call_state_ == 0);
}

void ConnectionManagerImpl::ActiveStream::resetIdleTimer() {
if (idle_timer_ != nullptr) {
idle_timer_->enableTimer(idle_timeout_ms_);
Contributor:

Have you tried a loadtest run with this on? I believe internally we ended up needing to latch "time of last data" and when the alarm fired resetting the alarm if necessary because the adds/removes were expensive enough we couldn't afford to thrash. I don't know if Envoy has a more efficient alarm management under the hood so I'm unsure if we'll have the same problem here.

Realistically if we leave it opt in or high timeouts, you're going to be the one who tracks down any perf problem so I'm fine with you landing first and seeing if it's a problem :-P

@htuch (Member Author), Jul 11, 2018:

I discussed this with @mattklein123 earlier; we agreed that libevent timers are probably fine for now, and we should leave this as a TODO for later optimization. Will actually put a TODO there.

Contributor:

I've thought about this (because I added idle-timeout to tcp proxy, with the same perf concern). I think we could probably make this fast enough if we add a tunable to timers for inaccuracy-percent or something. Then keep track of the current time it will expire, look at the new time, and if it has moved by less than X%, don't reschedule the timer.

But I decided not to bother until rescheduling timers showed up hot in a profile that I cared about.
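As a rough, standalone illustration of the reschedule-coalescing idea described above (the class name, the slack parameter, and the schedule_alarm callback are invented for this sketch; this is not Envoy's Timer API):

#include <chrono>
#include <functional>

// Only reschedule the underlying alarm when the deadline has moved by more
// than a configured fraction of the idle window, so per-byte activity on a
// busy stream does not thrash the timer machinery.
class CoalescingIdleTimer {
public:
  using Clock = std::chrono::steady_clock;

  CoalescingIdleTimer(std::chrono::milliseconds timeout, double slack_percent,
                      std::function<void(Clock::time_point)> schedule_alarm)
      : timeout_(timeout),
        slack_(std::chrono::duration_cast<Clock::duration>(timeout * (slack_percent / 100.0))),
        schedule_alarm_(std::move(schedule_alarm)) {}

  // Called on every encode/decode event for the stream.
  void onActivity(Clock::time_point now) {
    const Clock::time_point new_deadline = now + timeout_;
    if (new_deadline - scheduled_deadline_ > slack_) {
      scheduled_deadline_ = new_deadline;
      schedule_alarm_(new_deadline); // the only potentially expensive call
    }
  }

private:
  const std::chrono::milliseconds timeout_;
  const Clock::duration slack_;
  Clock::time_point scheduled_deadline_{}; // epoch => first activity always schedules
  std::function<void(Clock::time_point)> schedule_alarm_;
};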

Member:

It's possible/likely that in a super-high-throughput streaming case this timer reset behavior is going to show up on perf traces, but TBH I don't know how much, and yeah, I think it's probably fine for now. I would probably go for the simple approach first and we can make it better later if needed?

Contributor:

SGTM. As said Harvey ends up doing most of our perf tuning so if it shows up I'm sure he'll just deal with it later :-)

if (response_headers_ != nullptr) {
// TODO(htuch): We could send trailers here with an x-envoy timeout header
// or gRPC status code.
maybeEndEncode(true);
Contributor:

Isn't this the clean shutdown route? I'd expect resetStream().
I think if we do things this way the connection may be reused for a subsequent HTTP/1.1 request.

Member:

Yeah we need to reset here (optimally w/ a relevant h2 reset code if one exists, not sure). I think this will end up resetting the stream, but I think it will also do some stat/log accounting to call the response complete which we don't want, so I think you will need to go deeper than this function and possibly call doEndStream() directly or something else.
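To make the 408-vs-reset decision discussed here concrete, here is a tiny standalone model of the behavior the doc comment describes (the StreamActions callbacks are stand-ins, not ConnectionManagerImpl methods):

#include <functional>

// Stand-in hooks; in Envoy these would be HCM/codec operations.
struct StreamActions {
  std::function<void(int /*status*/, bool /*end_stream*/)> send_local_reply;
  std::function<void()> reset_stream;
};

// If the idle timer fires before any upstream response headers have been
// forwarded, the stream can still be answered with a 408. Once headers are
// already on the wire, the only clean option is a stream reset, so an
// HTTP/1.1 connection is not reused as if the response completed normally.
void onStreamIdleTimeout(bool response_headers_sent, StreamActions& actions) {
  if (!response_headers_sent) {
    actions.send_local_reply(408, /*end_stream=*/true);
  } else {
    actions.reset_stream();
  }
}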

@@ -882,6 +914,8 @@ void ConnectionManagerImpl::ActiveStream::encode100ContinueHeaders(

void ConnectionManagerImpl::ActiveStream::encodeHeaders(ActiveStreamEncoderFilter* filter,
Contributor:

resetIdleTimer on encode100ContinueHeaders?

auto* route_config = hcm.mutable_route_config();
auto* virtual_host = route_config->mutable_virtual_hosts(0);
auto* route = virtual_host->mutable_routes(0)->mutable_route();
route->mutable_idle_timeout()->set_seconds(1);
Contributor:

You may want shorter idle timeouts but a scaleTimeoutForTsan/Asan function. Your call.

@@ -159,6 +159,11 @@ class HttpIntegrationTest : public BaseIntegrationTest {
void testDrainClose();
void testRetry();
void testRetryHittingBufferLimit();
IntegrationStreamDecoderPtr setupPerStreamIdleTimeoutTest();
Contributor:

Did I miss it, or are there tests of timeouts on the request path?
I don't think you need to do every stage, as it's less interesting than the "have headers been sent" response path, but it'd be good to have one.

Member:

+1, testing the reset case is important.

Member Author:

The reset case is being tested; I can add a test for request headers + data sent, then timeout, which is missing. In general, with these tests, it's not really implied whether you are timing out on the request or the response path, since the timeout could be due to either party not doing its thang.

auto* virtual_host = route_config->mutable_virtual_hosts(0);
auto* route = virtual_host->mutable_routes(0)->mutable_route();
route->mutable_idle_timeout()->set_seconds(1);
});
Contributor:

Let's define this once and call it from both places, a la setAllowHttp10WithDefaultHost
Alternately I don't think it'd be a bad one to add into the config utils.

@@ -85,6 +85,20 @@ TEST_P(Http2IntegrationTest, TwoRequests) { testTwoRequests(); }

TEST_P(Http2IntegrationTest, Retry) { testRetry(); }

TEST_P(Http2IntegrationTest, PerStreamIdleTimeoutBeforeUpstreamHeaders) {
Contributor:

Optional but I'd like to move away from the "add each test to 3 files" paradigm towards using HttpProtocolIntegrationTest

If we're going to leave the timeouts at 1s, I'd encourage moving these to their own file,
defining typedef HttpProtocolIntegrationTest IdleTimeoutsTest and just running all 4 without all the extra TEST_Ps.

Feel free to leave it here for consistency and I'll go split things out at some later point.

@@ -437,6 +439,22 @@ message RouteAction {
google.protobuf.Duration per_try_timeout = 3 [(gogoproto.stdduration) = true];
}

// Specifies the idle timeout for the route. If not specified, there is no
// stream level timeout. This is distinct to :ref:`timeout
// stream, the timer will be reset. If the timeout fires, the stream is
// terminated with a 408 Request Timeout error code if no upstream response
// header has been received, otherwise a stream reset occurs.
google.protobuf.Duration idle_timeout = 24
Member:

Are we concerned that people are going to want different idle timeouts for decode than for encode? I could see this happening w/ asymmetric streaming APIs. I agree that resetting the idle timeout in both cases means that a single value can be used, I'm just concerned that we might end up adding different timeouts later. Do we think this can happen or can we say it will never be needed? If we think it can happen should we future proof the API somehow to make this not awkward? Would rather not have to go through a deprecation cycle to deal with that.

Member Author:

I seem to recall we had this discussion for the TCP proxy idle timeouts (@ggreenway). What we have there is the bidi idle timeout (as in this PR), and then unidirectional idle timeouts (not implemented yet). I think ultimately you want both bidi and unidirectional as separate configurable options, since setting the timeout in each direction separately has different semantics from bidi. E.g. if I configure 10s for encode and 10s for decode, I still don't have the same behavior as a 10s symmetrical timeout.

Contributor:

Yes, that sounds correct.

Member:

OK sgtm.
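A small standalone illustration of the difference described above between a bidirectional idle timeout and per-direction idle timeouts (the struct and function names are invented for this sketch):

#include <algorithm>
#include <chrono>

using Clock = std::chrono::steady_clock;

struct StreamActivity {
  Clock::time_point last_decode; // last downstream -> upstream event
  Clock::time_point last_encode; // last upstream -> downstream event
};

// Bidirectional (as in this PR): idle only if *neither* direction has seen
// activity within the window, so a response-only stream never trips it.
bool bidiIdle(const StreamActivity& s, Clock::time_point now,
              std::chrono::seconds window) {
  return now - std::max(s.last_decode, s.last_encode) > window;
}

// Unidirectional: each direction is judged on its own, so 10s on encode plus
// 10s on decode is stricter than a single 10s bidirectional timeout.
bool decodeIdle(const StreamActivity& s, Clock::time_point now,
                std::chrono::seconds window) {
  return now - s.last_decode > window;
}

bool encodeIdle(const StreamActivity& s, Clock::time_point now,
                std::chrono::seconds window) {
  return now - s.last_encode > window;
}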


@alyssawilk (Contributor) left a comment:

Sweet! Only a few more comments on my end - if you get Matt's approval Friday (and verify tests are happy with 1k runs) don't wait to merge on my account. :-)

// Specifies the idle timeout for the route. If not specified, there is no
// stream level timeout. This is distinct to :ref:`timeout
// Specifies the idle timeout for the route. If not specified, this defaults
// to 5 minutes. The default value was select so as not to interfere with any
Contributor:

Yay!

Let's update the PR description and risk and such accordingly :-)

// stream, the timer will be reset. If the timeout fires, the stream is
// terminated with a 408 Request Timeout error code if no upstream response
// header has been received, otherwise a stream reset occurs.
// After header decoding, the idle timeout will apply on downstream and
Contributor:

I know the existing hcm timeout applies when there are no streams. Do we have an open window where we've started reading headers for a stream but have not completed them, in which we would still leak stale connections?

Because we can't do a route-specific timeout until we have complete headers, I don't believe you could fix any timeout gap in this PR, but if it'd still exist it'd be good to file a tracking issue.

Member Author:

Agree; I think a global stream idle timeout makes sense to fix this. I've filed #3853 and will do a followup PR once this one merges to introduce that.

TEST_P(IdleTimeoutIntegrationTest, PerStreamIdleTimeoutAfterData) {
auto response = setupPerStreamIdleTimeoutTest();

Sleep();
Contributor:

Mind giving this a flake check with
--jobs 60 --local_resources 100000000000,100000000000,10000000 --runs_per_test=1000
? I'd expect it to be flaky or slow, but I may just be biased against sleep tests :-P

Member Author:

Yep, passes with:

INFO: Elapsed time: 282.940s, Critical Path: 19.66s
INFO: 1001 processes: 1001 linux-sandbox.
INFO: Build completed successfully, 1002 total actions
//test/integration:idle_timeout_integration_test                         PASSED in 19.5s
  Stats over 1000 runs: max = 19.5s, min = 14.7s, avg = 16.4s, dev = 0.9s

HttpProtocolIntegrationTest::protocolTestParamsToString);

// Per-stream idle timeout when waiting for initial upstream headers.
TEST_P(IdleTimeoutIntegrationTest, PerStreamIdleTimeoutBeforeUpstreamHeaders) {
Contributor:

Can you audit these tests for flakes?

I think a bunch of them are subject to the failure mode I've been fixing where Envoy will reset both upstream and downstream because [timeout] and then there's a race in cleanUpstreamAndDownstream because if the FIN arrives when we post the close() it registers as an unexpected disconnect.
(Alternately I'd be happy to say we shouldn't detect unexpected disconnect in cleanup and fix it once and for all there, but I think @mattklein123's preference is to have things which are closed explicitly handled.)
I believe fake_upstream_connection_->waitForDisconnect(); right after the codec client close will fix any of these.

Member Author:

Will fix by adding fake_upstreams_->set_allow_unexpected_disconnects(true); to setup.

testing::ValuesIn(HttpProtocolIntegrationTest::getProtocolTestParams()),
HttpProtocolIntegrationTest::protocolTestParamsToString);

// Per-stream idle timeout when waiting for initial upstream headers.
Contributor:

Can we update these descriptions? I'd say if we're sending a request-with-body we're not really waiting on upstream headers, it's more timing out waiting for downstream body.

@mattklein123 (Member) left a comment:

LGTM, few small questions/comments. Thanks for getting this done!

// to the introduction of this feature, while introducing robustness to TCP
// connections that terminate without FIN.
//
// The idle timeout is distinct to :ref:`timeout
Member:

Perhaps also compare to connection idle timeout. Better, would it make sense to make a small timeout section in the HTTP arch overview or general config guide about timeouts, describe them all, and then link back to that section from all of these config options? It's only going to get more confusing as we add more configurable timeouts. Thoughts?

Member Author:

Agreed. I'll take this as an action item for my followup when I introduce the global HCM idle timeout if that's OK.

Member:

Sure SGTM.

@@ -11,6 +11,8 @@ Version history
* health check: added support for :ref:`custom health check <envoy_api_field_core.HealthCheck.custom_health_check>`.
* health check: added support for :ref:`specifying jitter as a percentage <envoy_api_field_core.HealthCheck.interval_jitter_percent>`.
* health_check: added support for :ref:`health check event logging <arch_overview_health_check_logging>`.
* http: added support for a :ref:`per-stream idle timeout
<envoy_api_field_route.RouteAction.idle_timeout>`.
Member:

Perhaps say what the default is with a small warning about comparing to existing configured timeouts?

@@ -437,6 +439,27 @@ message RouteAction {
google.protobuf.Duration per_try_timeout = 3 [(gogoproto.stdduration) = true];
}

// Specifies the idle timeout for the route. If not specified, this defaults
Member:

How does one fully disable it if they want? Do we tell them to set a really big value? Or should we support 0 to mean no timeout? Thoughts?
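One hypothetical way to answer this, sketched standalone (it reflects only one of the options raised in the comment above, not a decided behavior): treat an absent value as "use the default" and an explicit zero as "disabled".

#include <chrono>
#include <optional>

// Hypothetical resolution semantics: absent => default, 0 => no per-stream
// idle timeout at all. Purely illustrative.
std::optional<std::chrono::milliseconds>
effectiveIdleTimeout(std::optional<std::chrono::milliseconds> configured,
                     std::chrono::milliseconds default_timeout) {
  if (!configured.has_value()) {
    return default_timeout; // e.g. the 5 minute default discussed above
  }
  if (configured->count() == 0) {
    return std::nullopt; // explicitly disabled
  }
  return configured;
}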

@@ -195,6 +195,7 @@ class AsyncStreamImpl : public AsyncClient::Stream,
return std::chrono::milliseconds(0);
}
}
absl::optional<std::chrono::milliseconds> idleTimeout() const override { return absl::nullopt; }
Member:

Random orthogonal thought: we might eventually want to plumb this through for real so that we could have idle timeouts for local-origin requests/streams. No action now.

Member Author:

Ack.

@htuch merged commit 45fe83e into envoyproxy:master on Jul 16, 2018
htuch added a commit that referenced this pull request Jul 23, 2018
This is a followup to #3841, where we introduce HCM-wide stream idle timeouts. This has two effects:

1. We can now timeout immediately after stream creation, potentially before receiving request headers and routing.

2. A default timeout can be configured across all routes. This is overridable on a per-route basis.

The default and overriding semantics are explained in the docs. Also added as a bonus some docs
about how timeouts interact more generally in Envoy.

Fixes #3853.

Risk Level: Low. While there is some change to the per-route vs. HCM wide semantics for stream idle
timeouts, it's not anticipated this feature is in common use yet (it's only a couple of days since
landing), and the caveats in #3841 with the new 5 minute default timeout should already apply.
Testing: Unit/integration tests added.

Signed-off-by: Harvey Tuch <htuch@google.com>
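A minimal sketch of the default-and-override semantics this follow-up describes (names invented for illustration; the real configuration lives in the HCM and RouteAction protos):

#include <chrono>
#include <optional>

// The HCM-wide stream idle timeout applies to every stream unless the matched
// route supplies its own value, which takes precedence.
std::chrono::milliseconds
effectiveStreamIdleTimeout(std::chrono::milliseconds hcm_wide_timeout,
                           std::optional<std::chrono::milliseconds> route_override) {
  return route_override.value_or(hcm_wide_timeout);
}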