
KAFKA-8940: decrease session timeout to make test faster and reliable #10871

Merged
guozhangwang merged 1 commit into apache:trunk from showuon:KAFKA-8940 on Jun 13, 2021

Conversation

@showuon
Member

@showuon showuon commented Jun 13, 2021

While there might still be some issues with the test, as described here by @ableegoldman, I found the reason why this test has failed quite frequently recently: we increased the session timeout to 45 sec in KIP-735.

The failure messages look like this:

java.lang.AssertionError: tagg is missing
verifying suppressed min-suppressed
verifying min-suppressed with 10 keys
verifying suppressed sws-suppressed
verifying min with 10 keys
verifying max with 10 keys
verifying dif with 10 keys
verifying sum with 10 keys
verifying cnt with 10 keys
verifying avg with 10 keys

Or

java.lang.AssertionError: verifying tagg
fail: key=562 tagg=[ConsumerRecord(topic = tagg, partition = 0, leaderEpoch = 0, offset = 2, CreateTime = 1623347258886, serialized key size = 3, serialized value size = 8, headers = RecordHeaders(headers = [], isReadOnly = false), key = 562, value = 1)] expected=0
	 taggEvents: [ConsumerRecord(topic = tagg, partition = 0, leaderEpoch = 0, offset = 2, CreateTime = 1623347258886, serialized key size = 3, serialized value size = 8, headers = RecordHeaders(headers = [], isReadOnly = false), key = 562, value = 1)]
verifying suppressed min-suppressed
verifying min-suppressed with 10 keys
verifying suppressed sws-suppressed
verifying min with 10 keys
verifying max with 10 keys
verifying dif with 10 keys
verifying sum with 10 keys
verifying cnt with 10 keys
verifying avg with 10 keys
avg fail: key=7-1006 actual=300.5952380952381 expected=506.5

Or

java.lang.AssertionError: verifying tagg
fail: key=694 tagg=[ConsumerRecord(topic = tagg, partition = 0, leaderEpoch = 0, offset = 9, CreateTime = 1623338149617, serialized key size = 3, serialized value size = 8, headers = RecordHeaders(headers = [], isReadOnly = false), key = 694, value = 1)] expected=0
	 taggEvents: [ConsumerRecord(topic = tagg, partition = 0, leaderEpoch = 0, offset = 9, CreateTime = 1623338149617, serialized key size = 3, serialized value size = 8, headers = RecordHeaders(headers = [], isReadOnly = false), key = 694, value = 1)]
verifying suppressed min-suppressed
verifying min-suppressed with 10 keys
verifying suppressed sws-suppressed
verifying min with 10 keys
verifying max with 10 keys
verifying dif with 10 keys
verifying sum with 10 keys
verifying cnt with 10 keys
verifying avg with 10 keys

These failures occur because processing does not complete in time, as described below.

We can check the Jenkins failure trend on the trunk branch here:
(Jenkins trunk failure-trend chart for this test)
This test had not failed since build #168, until build #206 and later builds.

The reason why increasing the session timeout affects this test is that the test keeps adding new stream clients and removing old ones, to keep only 3 stream clients alive. The problem is that when an old stream client is closed, we don't trigger a rebalance immediately, because the stream clients are all static members as described in KIP-345; a group rebalance is only triggered once session.timeout expires. That means after an old client closes, we have at least 45 sec during which some tasks are not working.
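To illustrate, here is a minimal configuration sketch (not the actual test code; the application id, bootstrap server, and instance id values are hypothetical) showing how a Streams client becomes a static member and how the session timeout can be lowered:

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StaticMembershipConfigSketch {
    public static Properties streamsProps() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "smoke-test-app");      // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // hypothetical broker address

        // Static membership (KIP-345): with group.instance.id set, closing this
        // client does NOT trigger an immediate rebalance; the broker waits for
        // session.timeout.ms to expire before reassigning its tasks.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG), "instance-1");

        // Lowering the session timeout below the 45-sec KIP-735 default shortens
        // the window during which a closed client's tasks sit idle.
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG), "10000");
        return props;
    }
}
```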

Also, in this test, we have 2 timeout conditions that will fail the test before verification passes:

  1. a 6-minute overall timeout
  2. polling 30 times (5 seconds each) without getting any data (that is, 30 * 5 = 150 sec without consuming any data)

For (1), in my test with the 45-sec session timeout, we create 8 stream clients, which means 5 clients get closed. Each closed client needs 45 sec before a rebalance is triggered, so we have 45 * 5 = 225 sec (~4 mins) during which some tasks are not working.
For (2), while a new client is created and an old client is closed, the group needs some time to rebalance. With the 45-sec session timeout, we only have ~100 sec left, so in a slow Jenkins environment the test might reach the 30 retries without getting any data and time out.
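A rough sketch of these two failure conditions (hypothetical code, not the actual SmokeTestDriver implementation; the method and error message are illustrative only):

```java
import java.time.Duration;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class VerificationTimeoutSketch {
    // Hypothetical verification loop: an overall 6-minute deadline plus a
    // bound of 30 consecutive empty 5-second polls.
    public static void verify(final Consumer<String, String> consumer) {
        final long deadline = System.currentTimeMillis() + Duration.ofMinutes(6).toMillis(); // condition (1)
        int emptyPolls = 0;

        while (System.currentTimeMillis() < deadline) {
            final ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            if (records.isEmpty()) {
                if (++emptyPolls >= 30) {  // condition (2): 30 * 5 s = 150 s without any data
                    throw new AssertionError("no data consumed for 150 seconds");
                }
            } else {
                emptyPolls = 0;
                // ... run the output verification on the received records ...
            }
        }
        // reaching the deadline without passing verification also fails the test
    }
}
```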

Therefore, decreasing the session timeout makes this test complete faster and more reliably.

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@showuon
Member Author

showuon commented Jun 13, 2021

@guozhangwang @vvcephei @bbejeck , could you help review this PR? Thank you.

Contributor

@guozhangwang guozhangwang left a comment

Thanks for the nice summary and analysis @showuon ! It makes sense to me.

@showuon
Member Author

showuon commented Jun 13, 2021

All tests passed! YA!

@guozhangwang guozhangwang merged commit 4724083 into apache:trunk Jun 13, 2021
@ableegoldman
Member

Thanks @showuon. Can you add a comment or update the description with the specific error message in the failure mode that this fix is intended to address? As you point out, my analysis of the test from a while back shows that we need to shore up either the input data production or the output verification itself to get this totally correct. You can detect when the failure is due to that bug in the test assumptions because the associated error is the java.lang.AssertionError: verifying tagg exception message.

It would be good to explicitly point out what kind of failure (ie the error message/exception/stacktrace) this fix was directed at, so we can keep an eye out for it and adjust the session timeout further if necessary. (I don't really expect it will, but you know how it is 🙂 )

@showuon
Member Author

showuon commented Jun 15, 2021

@ableegoldman, thanks for the good reminder. I totally agree with you. I've updated the PR description and added a comment on the JIRA ticket. Thank you.

mjsax added a commit to confluentinc/kafka that referenced this pull request Jun 15, 2021
Resolve merge conflicts in Jenkins file.


* MINOR: clean up unneeded `@SuppressWarnings` (apache#10855)

Reviewers: Luke Chen <showuon@gmail.com>, Matthias J. Sax <mjsax@apache.org>, Chia-Ping Tsai <chia7712@gmail.com>

* KAFKA-12940: Enable JDK 16 builds in Jenkins (apache#10702)

JDK 15 no longer receives updates, so we want to switch from JDK 15 to JDK 16.
However, we have a number of tests that don't yet pass with JDK 16.

Instead of replacing JDK 15 with JDK 16, we have both for now and we either
disable (via annotations) or exclude (via gradle) the tests that don't pass with
JDK 16 yet. The annotations approach is better, but it doesn't work for tests
that rely on the PowerMock JUnit 4 runner.

Also add `--illegal-access=permit` when building with JDK 16 to make MiniKdc
work for now. This has been removed in JDK 17, so we'll have to figure out
another solution when we migrate to that.

Relevant JIRAs for the disabled tests: KAFKA-12790, KAFKA-12941, KAFKA-12942.

Moved some assertions from `testTlsDefaults` to `testUnsupportedTlsVersion`
since the former claims to test the success case while the latter tests the failure case.

Reviewers: Chia-Ping Tsai <chia7712@gmail.com>

* KAFKA-12921: Upgrade zstd-jni to 1.5.0-2 (apache#10847)

This PR aims to upgrade `zstd-jni` from `1.4.9-1` to `1.5.0-2`.

This change will incorporate a number of bug fixes and performance improvements made in `1.5.0` of `zstd`:
- https://github.com/facebook/zstd/releases/tag/v1.5.0
- https://github.com/luben/zstd-jni/releases/tag/v1.5.0-1
- https://github.com/luben/zstd-jni/releases/tag/v1.5.0-2

The most recent `1.5.0` release offers +25%-140% (compression) and +15% (decompression) performance
improvements under certain conditions. Those conditions are unlikely to apply to Kafka with the default
configuration, however.

Since this is a dependency change, this should pass all the existing CIs.

Reviewers: Lee Dongjin <dongjin@apache.org>, Ismael Juma <ismael@juma.me.uk>

* KAFKA-8940: decrease session timeout to make test faster and reliable (apache#10871)

Reviewers: Guozhang Wang <wangguoz@gmail.com>

* MINOR: enable EOS during smoke test IT (apache#10870)

This IT has been failing on trunk recently. Enabling EOS during the integration test
makes it easier to be sure that the test's assumptions are really true during verification
and should make the test more reliable.

I also noticed that in the actual system test file, we are using the deprecated property
name "beta" instead of "v2".

Reviewers: Boyang Chen <boyang@apache.org>

* MINOR: Log formatting for exceptions during configuration related operations (apache#10843)

Format configuration logging during exceptions or errors. Also make sure it redacts sensitive information or unknown values.

Reviewers: Luke Chen <showuon@gmail.com>, David Jacot <djacot@confluent.io>

* KAFKA-12914: StreamSourceNode should return `null` topic name for pattern subscription (apache#10846)

Reviewers: Luke Chen <showuon@gmail.com>, Bruno Cadonna <bruno@confluent.io>, Guozhang Wang <guozhang@confluent.io>

* KAFKA-12948: Remove node from ClusterConnectionStates.connectingNodes when node is removed (apache#10882)

NetworkClient.poll() throws IllegalStateException when checking isConnectionSetupTimeout if all nodes in ClusterConnectionStates.connectingNodes aren't present in ClusterConnectionStates.nodeState. This commit ensures that when we remove a node from nodeState, we also remove from connectingNodes.

Reviewers: David Jacot <djacot@confluent.io>

* KAFKA-12701: NPE in MetadataRequest when using topic IDs (apache#10584)

We prevent handling MetadataRequests where the topic name is null (to prevent NPE) as
well as prevent requests that set topic IDs since this functionality has not yet been
implemented. When we do implement it  in apache#9769,
we should bump the request/response version.

Added tests to ensure the error is thrown.

Reviewers: dengziming <swzmdeng@163.com>, Ismael Juma <ismael@juma.me.uk>

Co-authored-by: Josep Prat <josep.prat@aiven.io>
Co-authored-by: Ismael Juma <ismael@juma.me.uk>
Co-authored-by: David Christle <dchristle@users.noreply.github.com>
Co-authored-by: Luke Chen <showuon@gmail.com>
Co-authored-by: John Roesler <vvcephei@users.noreply.github.com>
Co-authored-by: YiDing-Duke <dingyi.zj@gmail.com>
Co-authored-by: Rajini Sivaram <rajinisivaram@googlemail.com>
Co-authored-by: Justine Olshan <jolshan@confluent.io>