
KAFKA-17512: Move LogSegmentTest to storage module #17174

Open · xijiu wants to merge 6 commits into trunk from gh-move_LogSegmentTest_to_storage

Conversation

@xijiu (Contributor) commented Sep 12, 2024

As the title says: move LogSegmentTest to the storage module.

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@xijiu force-pushed the gh-move_LogSegmentTest_to_storage branch from 472b55e to d9d3b52 on September 12, 2024 at 10:46
@frankvicky (Collaborator) left a comment:

Hi @xijiu, I have a few minor comments, PTAL.

"2147483648,4294967296"
})
public void testAppendForLogSegmentOffsetOverflowException(long baseOffset, long largestOffset) throws IOException {
LogSegment seg = createSegment(baseOffset, 10, Time.SYSTEM);
frankvicky (Collaborator):

nit: It seems that in the Scala version, LogSegment wasn't being closed, but I think we could leverage the try-with-resources statement to make the test more stable and avoid potential issues.
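
A minimal sketch of the suggested pattern, with names taken from the quoted test (this assumes LogSegment implements AutoCloseable, which try-with-resources requires):

    // Sketch: the segment is closed automatically, even if an assertion throws.
    try (LogSegment seg = createSegment(baseOffset, 10, Time.SYSTEM)) {
        // ... exercise seg here ...
    }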

frankvicky (Collaborator):

Please also apply to the following test if needed.

"100, 10",
"2147483648, 0",
"-2147483648, 0",
"2147483648,4294967296"
frankvicky (Collaborator):

nit: "2147483648, 4294967296" to align the style with above csv parameters.

return new ProducerStateManager(
topicPartition,
logDir,
5 * 60 * 1000,
frankvicky (Collaborator):

nit: I think ((int) (Duration.ofMinutes(5).toMillis())) is more readable, WDYT?
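
Applied to the quoted constructor call, the suggestion replaces the hard-coded millisecond count, roughly like this (a sketch; java.time.Duration is in the JDK, and the variable name producerIdExpirationMs is illustrative, not taken from the diff):

    import java.time.Duration;

    // Suggested replacement for the hard-coded 5 * 60 * 1000:
    int producerIdExpirationMs = (int) Duration.ofMinutes(5).toMillis();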

@xijiu force-pushed the gh-move_LogSegmentTest_to_storage branch from 69fbd56 to 8c71dcc on September 14, 2024 at 02:10
@xijiu (Contributor, Author) commented Sep 14, 2024

Hi @frankvicky,

Thanks very much for the code review; those comments were very helpful, and I have fixed them. PTAL.

BTW, the old LogSegmentTest.scala gained a new method, testDeleteIfExistsWithGetParentIsNull; I have moved it to LogSegmentTest.java as well.

@chia7712 (Contributor) left a comment:

@xijiu thanks for this patch. If you address all the comments, the method iteratorToList can be dropped, as it becomes unused.

    public void testAppendForLogSegmentOffsetOverflowException(long baseOffset, long largestOffset) throws IOException {
        try (LogSegment seg = createSegment(baseOffset, 10, Time.SYSTEM)) {
            long currentTime = Time.SYSTEM.milliseconds();
            long shallowOffsetOfMaxTimestamp = largestOffset;
chia7712 (Contributor):

please remove this unnecessary local variable

        long currentTime = Time.SYSTEM.milliseconds();
        long shallowOffsetOfMaxTimestamp = largestOffset;
        MemoryRecords memoryRecords = records(0, "hello");
        assertThrows(LogSegmentOffsetOverflowException.class, () -> {
chia7712 (Contributor):

please use lambda instead
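
One plausible reading of this comment is to collapse the block lambda into a single-expression lambda; a hypothetical sketch using the names from the quoted hunk (the append arguments here are illustrative, not the actual LogSegment.append signature from the diff):

    // Hypothetical: expression lambda instead of a block body.
    assertThrows(LogSegmentOffsetOverflowException.class,
        () -> seg.append(largestOffset, currentTime, shallowOffsetOfMaxTimestamp, memoryRecords));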


        // check that we can read back both messages
        FetchDataInfo read = seg.read(offset, 10000);
        assertEquals(Arrays.asList(ms1.records().iterator().next(), ms2.records().iterator().next()), iteratorToList(read.records.records().iterator()));
chia7712 (Contributor):

please use assertIterableEquals
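
Applied to the read-back assertion quoted above, the suggestion would look roughly like this (assertIterableEquals is a standard JUnit 5 assertion; this assumes read.records.records() yields the same record type as the expected list):

    assertIterableEquals(
        Arrays.asList(ms1.records().iterator().next(), ms2.records().iterator().next()),
        read.records.records());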

        // Now truncate off the last message
        seg.truncateTo(offset + 1);
        FetchDataInfo read2 = seg.read(offset, 10000);
        assertEquals(1, iteratorToList(read2.records.records().iterator()).size());
chia7712 (Contributor):

all we want to verify is the content of the records, so we can use assertIterableEquals(ms1.records(), read2.records.records()); instead


        // Then we should still truncate the record that was present (i.e. offset + 3 is gone)
        FetchDataInfo log = seg.read(offset, 10000);
        assertEquals(offset, log.records.batches().iterator().next().baseOffset());
chia7712 (Contributor):

            Iterator<? extends RecordBatch> iter = log.records.batches().iterator();
            assertEquals(offset, iter.next().baseOffset());
            assertFalse(iter.hasNext());

@xijiu (Contributor, Author) commented Sep 18, 2024

@chia7712 Thanks for the code review. I have fixed the code, PTAL.
