KAFKA-17512: Move LogSegmentTest to storage module #17174
base: trunk
Conversation
Force-pushed from 472b55e to d9d3b52.
Hi @xijiu
I have a few minor comments, PTAL.
    "2147483648,4294967296"
})
public void testAppendForLogSegmentOffsetOverflowException(long baseOffset, long largestOffset) throws IOException {
    LogSegment seg = createSegment(baseOffset, 10, Time.SYSTEM);
nit: It seems that in the Scala version, LogSegment wasn't being closed, but I think we could leverage the try-with-resources statement to make the test more stable and avoid potential issues.
Please also apply to the following test if needed.
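The suggestion can be sketched as follows. This is a minimal illustration only: FakeSegment is a hypothetical stand-in for Kafka's LogSegment (which implements Closeable), used here just to show the close-on-exit guarantee that try-with-resources gives the test.

```java
// Sketch of the reviewer's suggestion. FakeSegment is a hypothetical stand-in
// for LogSegment, which also implements Closeable.
public class TryWithResourcesSketch {
    static class FakeSegment implements AutoCloseable {
        boolean closed = false;
        void append() { /* exercise the segment as the test body would */ }
        @Override
        public void close() { closed = true; }
    }

    public static void main(String[] args) {
        FakeSegment observed;
        // try-with-resources closes the segment even if the body throws,
        // which is what makes the test more stable than leaving it open.
        try (FakeSegment seg = new FakeSegment()) {
            observed = seg;
            seg.append();
        }
        System.out.println(observed.closed); // prints "true"
    }
}
```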
"100, 10",
"2147483648, 0",
"-2147483648, 0",
"2147483648,4294967296"
nit: "2147483648, 4294967296" to align the style with the CSV parameters above.
return new ProducerStateManager(
    topicPartition,
    logDir,
    5 * 60 * 1000,
nit: I think ((int) (Duration.ofMinutes(5).toMillis())) is more readable, WYDT?
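As a quick illustration of this nit, both spellings denote the same value; the constant name below is made up for the sketch and is not the parameter name in ProducerStateManager.

```java
import java.time.Duration;

public class TimeoutSketch {
    // The diff passes a raw 5 * 60 * 1000; the reviewer's Duration form is
    // self-describing. PRODUCER_ID_EXPIRATION_MS is a hypothetical name.
    static final int PRODUCER_ID_EXPIRATION_MS = (int) Duration.ofMinutes(5).toMillis();

    public static void main(String[] args) {
        System.out.println(PRODUCER_ID_EXPIRATION_MS == 5 * 60 * 1000); // prints "true"
    }
}
```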
Force-pushed from 69fbd56 to 8c71dcc.
Hi @frankvicky, thanks very much for the code review; those comments were very helpful and I have fixed them. PTAL. BTW, the old …
@xijiu thanks for this patch. If you address all the comments, the method iteratorToList can be dropped, as it becomes unused.
public void testAppendForLogSegmentOffsetOverflowException(long baseOffset, long largestOffset) throws IOException {
    try (LogSegment seg = createSegment(baseOffset, 10, Time.SYSTEM)) {
        long currentTime = Time.SYSTEM.milliseconds();
        long shallowOffsetOfMaxTimestamp = largestOffset;
please remove this unnecessary local variable
long currentTime = Time.SYSTEM.milliseconds();
long shallowOffsetOfMaxTimestamp = largestOffset;
MemoryRecords memoryRecords = records(0, "hello");
assertThrows(LogSegmentOffsetOverflowException.class, () -> {
please use lambda instead
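To illustrate the comment, here is a minimal sketch of the block-lambda versus expression-lambda shapes. ThrowingRunnable and throwsExpected are hypothetical stand-ins for JUnit's Executable and assertThrows, used only so the sketch is self-contained:

```java
public class LambdaSketch {
    // Hypothetical stand-in for JUnit's Executable functional interface.
    @FunctionalInterface
    interface ThrowingRunnable {
        void run() throws Exception;
    }

    // Hypothetical stand-in for assertThrows: true iff the action throws the expected type.
    static boolean throwsExpected(Class<? extends Exception> expected, ThrowingRunnable action) {
        try {
            action.run();
            return false;
        } catch (Exception e) {
            return expected.isInstance(e);
        }
    }

    static void boom() {
        throw new IllegalStateException("overflow");
    }

    public static void main(String[] args) {
        // A single throwing call reads best as an expression lambda ...
        boolean viaExpression = throwsExpected(IllegalStateException.class, () -> boom());
        // ... rather than a block lambda wrapping the same single statement.
        boolean viaBlock = throwsExpected(IllegalStateException.class, () -> {
            boom();
        });
        System.out.println(viaExpression + " " + viaBlock); // prints "true true"
    }
}
```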
// check that we can read back both messages
FetchDataInfo read = seg.read(offset, 10000);
assertEquals(Arrays.asList(ms1.records().iterator().next(), ms2.records().iterator().next()), iteratorToList(read.records.records().iterator()));
please use assertIterableEquals
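For reference, JUnit's assertIterableEquals checks element-wise equality in order and that both sides are exhausted together. The iterablesEqual method below is a hypothetical plain-Java mimic of that check, not the JUnit API itself:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.Objects;

public class IterableEqualsSketch {
    // Hypothetical mimic of assertIterableEquals' comparison: same elements,
    // same order, and both iterators end together.
    static boolean iterablesEqual(Iterable<?> a, Iterable<?> b) {
        Iterator<?> ia = a.iterator();
        Iterator<?> ib = b.iterator();
        while (ia.hasNext() && ib.hasNext()) {
            if (!Objects.equals(ia.next(), ib.next())) {
                return false;
            }
        }
        return !ia.hasNext() && !ib.hasNext();
    }

    public static void main(String[] args) {
        System.out.println(iterablesEqual(Arrays.asList("hello", "there"), Arrays.asList("hello", "there"))); // prints "true"
        System.out.println(iterablesEqual(Arrays.asList("hello"), Arrays.asList("hello", "there"))); // prints "false"
    }
}
```

In the test itself, the reviewer's one-liner expresses the same check directly against the fetched records.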
// Now truncate off the last message
seg.truncateTo(offset + 1);
FetchDataInfo read2 = seg.read(offset, 10000);
assertEquals(1, iteratorToList(read2.records.records().iterator()).size());
All we want to verify is the content of the records, so we can use assertIterableEquals(ms1.records(), read2.records.records()); instead.
// Then we should still truncate the record that was present (i.e. offset + 3 is gone)
FetchDataInfo log = seg.read(offset, 10000);
assertEquals(offset, log.records.batches().iterator().next().baseOffset());
Iterator<? extends RecordBatch> iter = log.records.batches().iterator();
assertEquals(offset, iter.next().baseOffset());
assertFalse(iter.hasNext());
@chia7712 Thanks for the code review. I have fixed the code, PTAL.
As title