traffic-resilience-http: Fix flaky testStopAcceptingConnections() test (#3125)

Motivation:

The `TrafficResilienceHttpServiceFilterTest.testStopAcceptingConnections(..)` test is flaky: sometimes a connection times out before the test expects it to. The extra connection attempts exist to account for intrinsic races as well as kernel connection-backlog behavior, which is very hard to control.

Modifications:

The timeout behavior is the desired result; it just happens before we expect it to. So, instead of making exactly two connections that we expect to succeed followed by one that should fail, we perform a three-iteration loop and demand that the server eventually stops accepting connections (presuming it's not a dry run), which is the desired behavior.
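For illustration, here is a condensed, self-contained sketch of that strategy (the real change is in the diff below; the class name, `assertEventuallyStopsAccepting`, and `ConnectAttempt` are invented for this sketch and are not part of the test):

    import java.util.concurrent.ExecutionException;

    final class EventualRejectionSketch {
        // Tolerate up to two racy connect successes, but demand a rejection by the third attempt.
        static void assertEventuallyStopsAccepting(boolean dryRun, ConnectAttempt attempt) throws Exception {
            for (int i = 0; i < 3; i++) {
                try {
                    attempt.connect(); // stands in for client.reserveConnection(..).toFuture().get()
                } catch (ExecutionException e) {
                    // A ConnectTimeoutException cause is the desired "stopped accepting connections" signal.
                    if (!dryRun) {
                        return; // rejection observed: the desired behavior happened
                    }
                    throw e; // dry-run mode should never reject connections
                }
            }
            if (!dryRun) {
                throw new AssertionError("Connection was never rejected.");
            }
        }

        @FunctionalInterface
        interface ConnectAttempt {
            void connect() throws Exception;
        }
    }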

Closes #3076.
bryce-anderson authored Dec 5, 2024
1 parent 3b0f3c9 commit 74dc8c1
Showing 1 changed file with 39 additions and 36 deletions.
@@ -56,7 +56,6 @@
 import static io.servicetalk.capacity.limiter.api.CapacityLimiters.fixedCapacity;
 import static io.servicetalk.concurrent.api.Single.succeeded;
 import static io.servicetalk.concurrent.internal.DeliberateException.DELIBERATE_EXCEPTION;
-import static io.servicetalk.concurrent.internal.TestTimeoutConstants.CI;
 import static io.servicetalk.http.api.HttpProtocolVersion.HTTP_1_1;
 import static io.servicetalk.http.netty.AsyncContextHttpFilterVerifier.verifyServerFilterAsyncContextVisibility;
 import static io.servicetalk.http.netty.HttpProtocolConfigs.h1Default;
@@ -223,49 +222,53 @@ void testStopAcceptingConnections(final boolean dryRun, final String protocol) t
                 .dryRun(dryRun)
                 .build();

-        final HttpServerContext serverContext = HttpServers.forAddress(localAddress(0))
+        try (HttpServerContext serverContext = HttpServers.forAddress(localAddress(0))
                 .protocols(protocolConfig)
                 .listenSocketOption(SO_BACKLOG, TCP_BACKLOG)
                 .appendNonOffloadingServiceFilter(filter)
                 .listenStreamingAndAwait((ctx, request, responseFactory) ->
-                        succeeded(responseFactory.ok().payloadBody(Publisher.never())));
+                        succeeded(responseFactory.ok().payloadBody(Publisher.never())))) {

-        final StreamingHttpClient client = HttpClients.forSingleAddress(serverHostAndPort(serverContext))
-                .protocols(protocolConfig)
-                .socketOption(CONNECT_TIMEOUT, (int) SECONDS.toMillis(CI ? 4 : 2))
-                .buildStreaming();
-
-        // First request -> Pending 1
-        final StreamingHttpRequest meta1 = client.newRequest(HttpRequestMethod.GET, "/");
-        client.reserveConnection(meta1)
-                .flatMap(it -> it.request(meta1))
-                .concat(Completable.defer(() -> {
-                    // First request, has a "never" pub as a body, we don't attempt to consume it.
-                    // Concat second request -> out of capacity -> server yielded
-                    final StreamingHttpRequest meta2 = client.newRequest(HttpRequestMethod.GET, "/");
-                    return client.reserveConnection(meta2).flatMap(it -> it.request(meta2)).ignoreElement();
-                }))
-                .toFuture()
-                .get();
+            try (StreamingHttpClient client = HttpClients.forSingleAddress(serverHostAndPort(serverContext))
+                    .protocols(protocolConfig)
+                    .socketOption(CONNECT_TIMEOUT, (int) SECONDS.toMillis(2))
+                    .buildStreaming()) {

-        // Netty will evaluate the "yielding" (i.e., auto-read) on this attempt, so this connection will go through.
-        assertThat(client.reserveConnection(client.newRequest(HttpRequestMethod.GET, "/"))
-                .toFuture().get().asConnection(), instanceOf(HttpConnection.class));
+                // First request -> Pending 1
+                final StreamingHttpRequest meta1 = client.newRequest(HttpRequestMethod.GET, "/");
+                client.reserveConnection(meta1)
+                        .flatMap(it -> it.request(meta1))
+                        .concat(Completable.defer(() -> {
+                            // The response has a "never" pub as a body and we don't attempt to consume it.
+                            // Concat second request -> out of capacity -> server yielded
+                            final StreamingHttpRequest meta2 = client.newRequest(HttpRequestMethod.GET, "/");
+                            return client.reserveConnection(meta2).flatMap(it -> it.request(meta2)).ignoreElement();
+                        }))
+                        .toFuture()
+                        .get();

-        // This connection shall full-fil the BACKLOG=1 setting
-        assertThat(client.reserveConnection(client.newRequest(HttpRequestMethod.GET, "/"))
-                .toFuture().get().asConnection(), instanceOf(HttpConnection.class));
+                // We expect up to a couple connections to succeed due to the intrinsic race between disabling accepts
+                // and new connect requests, as well as to account for kernel connect backlog (effectively 1 for all
+                // OS's). That means we can have up to two connects succeed, but expect it to fail by the 3rd attempt.
+                for (int i = 0; i < 3; i++) {
+                    if (dryRun) {
+                        client.reserveConnection(client.newRequest(HttpRequestMethod.GET, "/")).toFuture().get()
+                                .releaseAsync().toFuture().get();
+                    } else {
+                        try {
+                            assertThat(client.reserveConnection(client.newRequest(HttpRequestMethod.GET, "/"))
+                                    .toFuture().get().asConnection(), instanceOf(HttpConnection.class));
+                        } catch (ExecutionException e) {
+                            assertThat(e.getCause(), instanceOf(ConnectTimeoutException.class));
+                            // We saw the connection rejection so we succeeded.
+                            return;
+                        }
+                    }
+                }

-        // Any attempt to create a connection now, should time out if we're not in dry mode.
-        if (dryRun) {
-            client.reserveConnection(client.newRequest(HttpRequestMethod.GET, "/")).toFuture().get()
-                    .releaseAsync().toFuture().get();
-        } else {
-            try {
-                client.reserveConnection(client.newRequest(HttpRequestMethod.GET, "/")).toFuture().get();
-                fail("Expected a connection timeout");
-            } catch (ExecutionException e) {
-                assertThat(e.getCause(), instanceOf(ConnectTimeoutException.class));
+                if (!dryRun) {
+                    fail("Connection was never rejected.");
+                }
             }
         }
     }
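The new comment in the loop above notes that the kernel connect backlog can let an extra connect or two succeed even after the server stops accepting. A minimal standalone illustration of that OS-level behavior, using plain java.net rather than ServiceTalk (the class and variable names are invented for this sketch):

    import java.net.ServerSocket;
    import java.net.Socket;

    public final class BacklogSketch {
        public static void main(String[] args) throws Exception {
            // A listening socket with a backlog of 1 that never calls accept():
            // this approximates a server that has stopped accepting connections.
            try (ServerSocket server = new ServerSocket(0, 1)) {
                // The kernel can still complete roughly one handshake on the application's behalf,
                // so this connect succeeds even though accept() is never called.
                try (Socket queued = new Socket("127.0.0.1", server.getLocalPort())) {
                    System.out.println("connected via kernel backlog: " + queued.isConnected());
                }
                // Once the backlog is full, further connects typically stall (OS-dependent) until the
                // client's connect timeout fires, which is the ConnectTimeoutException the test waits for.
            }
        }
    }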
