
Rewrite LogStorageAppenderTest #9244

Closed
ChrisKujawa opened this issue Apr 28, 2022 · 2 comments · Fixed by #9303
Assignees
Labels
area/test Marks an issue as improving or extending the test coverage of the project kind/toil Categorizes an issue or PR as general maintenance, i.e. cleanup, refactoring, etc. version:8.1.0-alpha2 version:8.1.0 Marks an issue as being completely or in parts released in 8.1.0

Comments

@ChrisKujawa
Member

ChrisKujawa commented Apr 28, 2022

Description

Currently, the LogStorageAppenderTest is tightly coupled to implementation details. While moving the AtomixLogStorage (#9235) we had to move the test as well and open up the Writer implementation, which is sub-optimal.

We have to reevaluate what we actually want to test here. Is it the inconsistency check, which is part of the ZeebeValidator and LeaderAppender? If so, we can test this differently; there is no need to use the LogStorageAppender here.

Potentially we can split up the test class, move parts back to the logstreams tests, and leave some in the broker to test the inconsistency check.

Related comment #9235 (comment)

@ChrisKujawa ChrisKujawa added kind/toil Categorizes an issue or PR as general maintenance, i.e. cleanup, refactoring, etc. team/distributed area/test Marks an issue as improving or extending the test coverage of the project labels Apr 28, 2022
@npepinpe npepinpe self-assigned this May 2, 2022
@npepinpe
Member

npepinpe commented May 5, 2022

So I'll note down some observations.

  1. The LogStorageAppenderTest should 100% be in the logstreams module. The only test which prevents this is the invalid entry one, which can be tested as you mentioned directly with the entry validator.
  2. The ZeebeEntryValidator is a bit of a headache. It needs to know about ApplicationEntry (i.e. Raft-specific stuff), but also wants to read the actual logstream entry (as seen by its usage of LogEntryDescriptor). This couples it to the implementation of the logstream. That is currently necessary because it iterates through the entry buffer and checks for gaps in the positions. We could replace this with just comparing the ApplicationEntry#lowestPosition/highestPosition of both entries, but that check is less strict than actually going through the buffer and checking the actual positions for gaps. For example, you could have an ApplicationEntry with lowest = 10 and highest = 20, but one of the log entries in it could still have the wrong position.
  3. The LogStorageAppender is overcomplicated. It could be simplified by splitting the non-actor logic from the actor-specific logic, e.g. a LogStorageAppenderActor deals with lifecycle, consuming the channel, timers, etc., while the LogStorageAppender just appends things without any actor knowledge.
  4. The LogStorageAppender is tightly coupled to the Dispatcher, even though it actually only uses one method (peekBlock) and one class (BlockPeek), and even in that class it only uses one method (getRawBuffer). This makes writing tests for it heavier, more complex, and flakier - you need to deal with a complete dispatcher. Instead, you could have Subscription implement a simple interface, e.g. ConsumableStream with the peekBlock method, and have BlockPeek implement a simple interface PeekBuffer with a single method getReadBuffer() which would return a read-only copy of the raw buffer. This would greatly simplify the tests and increase maintainability.
  5. I notice that the engine uses LogStreamBatchReaderImpl directly instead of getting it from the LogStream, which breaks abstraction.
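The interface extraction suggested in point 4 could look roughly like this. A minimal Java sketch: ConsumableStream and PeekBuffer are the names suggested above, but the exact method signatures and the InMemoryBlock helper are assumptions for illustration, not the actual Zeebe or Dispatcher API.

```java
import java.nio.ByteBuffer;

// Sketch of the decoupling from point 4: the appender would depend on these
// two small interfaces instead of the full Dispatcher/BlockPeek classes.
interface PeekBuffer {
  /** Returns a read-only view of the raw buffer of the peeked block. */
  ByteBuffer getReadBuffer();
}

interface ConsumableStream {
  /** Peeks the next block; returns the number of readable bytes, or 0 if none. */
  int peekBlock(PeekBuffer peek, int maxBlockSize);
}

// Trivial in-memory implementation showing how a test could stub the buffer
// side without spinning up a complete dispatcher.
final class InMemoryBlock implements PeekBuffer {
  private final ByteBuffer buffer;

  InMemoryBlock(final byte[] data) {
    this.buffer = ByteBuffer.wrap(data).asReadOnlyBuffer();
  }

  @Override
  public ByteBuffer getReadBuffer() {
    return buffer.duplicate();
  }
}
```

With interfaces this small, a unit test can drive the appender with a handful of in-memory blocks instead of a real Dispatcher, which should remove most of the flakiness mentioned above.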

Anyway, my proposal is:

  • Remove the shouldDetectInvalidEntry test from the LogStorageAppenderTest, move LogStorageAppenderTest back to logstreams.
  • For now, rewrite the ZeebeEntryValidator to just check the highest and lowest positions of the two ApplicationEntry instances, without knowing how to read the log entries.
  • Add a ZeebeEntryValidatorTest which tests that invalid entries are properly detected.
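As a rough illustration of the proposed relaxed check, here is a hedged Java sketch. ApplicationEntry is reduced to a plain record, and the contiguity rule (next lowest position = previous highest position + 1) is an assumption about the position semantics for the sake of the example, not the actual Zeebe implementation.

```java
// Simplified stand-in for the Raft ApplicationEntry; the real type carries more.
record ApplicationEntry(long lowestPosition, long highestPosition) {}

// Sketch of the relaxed validator: it only compares the position bounds of
// consecutive entries and never reads the underlying log entry buffer.
final class SimplifiedEntryValidator {
  private long lastHighestPosition = -1;

  /** Returns null if the entry is accepted, otherwise a failure reason. */
  String validate(final ApplicationEntry entry) {
    // Assumed contiguity rule: the next entry starts right after the last one.
    if (lastHighestPosition >= 0 && entry.lowestPosition() != lastHighestPosition + 1) {
      return "gap or overlap: expected lowest position %d but was %d"
          .formatted(lastHighestPosition + 1, entry.lowestPosition());
    }
    lastHighestPosition = entry.highestPosition();
    return null;
  }
}
```

As noted in observation 2, this is strictly weaker than walking the buffer: an entry with consistent bounds could still contain an individual log entry with a wrong position, which is why the stricter check is deferred to a follow-up.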

This would be the scope of this issue. I would propose a follow-up issue to deal with the ZeebeEntryValidator. The crux of the issue here is that knowing how to read log entries is an implementation detail of the log stream implementation, but we register/create the ZeebeEntryValidator when building the Raft instance. Since the LogStream implementation relies on the Raft itself as its log storage, there's a chicken-and-egg problem. I would propose instead to have the entry validator be registerable by the LogStream once it is installed. Conceptually I believe it makes sense - the log stream is installed on the Raft (with the implied assumption that it is the ONLY thing writing ApplicationEntry, of course), and only it knows how to validate its own entries. This means there needs to be some new interface in the LogStorage to register an entry validator.

I would also propose moving the logic of validating entries into the storage itself. Right now this is done in the ZeebeLogAppender/LeaderRole, but it's unclear to me why that is necessary, as we don't use these anywhere else. My only argument here is that this makes everything simpler (e.g. you register the validator in the storage and then use it right there in the append method - no need to propagate it), but I might be missing something.
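A possible shape for that registration hook, as a sketch only: EntryValidator and ValidatingLogStorage are illustrative names invented for this example, not existing Zeebe types, and the real LogStorage interface would of course look different.

```java
// Hypothetical validator hook the log stream would register once installed.
interface EntryValidator {
  /** Returns null if the entry is valid, otherwise a failure reason. */
  String validate(byte[] entry);
}

// Sketch of the proposal: validation lives in the storage's append path
// instead of in the ZeebeLogAppender/LeaderRole.
final class ValidatingLogStorage {
  private EntryValidator validator = entry -> null; // no-op until one is registered

  /** Called by the log stream once it is installed on this storage. */
  void setEntryValidator(final EntryValidator validator) {
    this.validator = validator;
  }

  /** Validation happens right here, so nothing needs to be propagated. */
  void append(final byte[] entry) {
    final String failure = validator.validate(entry);
    if (failure != null) {
      throw new IllegalStateException("rejected entry: " + failure);
    }
    // ... hand the entry to the underlying journal ...
  }
}
```

This keeps the chicken-and-egg problem out of the Raft bootstrap: the storage is built without a validator and the log stream supplies one after installation.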

@npepinpe
Member

npepinpe commented May 5, 2022

To be honest, we could also do the second issue first (i.e. moving the validator to logstreams), which would then avoid having to rewrite the test a second time.

@ghost ghost closed this as completed in 97ef9ea (#9303) May 6, 2022
ghost pushed a commit that referenced this issue May 6, 2022
9307: Clean up transforming some data classes to records r=npepinpe a=npepinpe

## Description

This PR cleans up some data classes from the Raft log/journal by transforming them to records. One thing I learned: Kryo does not play nice with records, so if we want to keep rolling updates, we cannot switch anything in the protocol to records. To that end, I moved the `PersistedRaftRecord` into the protocol subpackage, since it's used only for passing over the network.

## Related issues

This came out of the PR to refactor the log storage appender tests, as I had to dig a bit into Atomix storage.

related to #9244 



Co-authored-by: Nicolas Pepin-Perreault <nicolas.pepin-perreault@camunda.com>
@ChrisKujawa ChrisKujawa added the version:8.1.0 Marks an issue as being completely or in parts released in 8.1.0 label Oct 4, 2022
This issue was closed.