Fix series file tombstoning. #10339

Merged 1 commit into master on Oct 8, 2018

Conversation

benbjohnson (Contributor)

This commit fixes an issue with the series file compaction process where tombstones were lost after compaction and series existence checks returned incorrect results. It also fixes some smaller flushing issues within the series file, mainly related to testing.

	} else if !sfile.IsDeleted(ids0[0]) {
		t.Fatal("expected deletion after reopen")
	}
}
benbjohnson (Contributor, Author)

This is the original failing test case that now works with the new existence check.
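
For context, a minimal sketch of the flow that snippet sits in is below. Everything other than sfile.IsDeleted, ids0, and t.Fatal is an assumption (MustOpenSeriesFile, CreateSeries, DeleteSeriesID, and Reopen are hypothetical helpers), so treat it as an outline rather than the actual test from the diff:

// Hypothetical outline of the failing scenario; every helper except IsDeleted
// is an assumption, not the code from this PR.
func TestSeriesFile_TombstoneSurvivesReopen(t *testing.T) {
	sfile := MustOpenSeriesFile(t) // hypothetical: open a series file in a temp dir
	defer sfile.Close()

	// Create one series and remember its ID.
	ids0 := sfile.CreateSeries(t, "cpu") // hypothetical: returns the created series IDs

	// Delete it, then reopen the file so the on-disk state answers the check.
	if err := sfile.DeleteSeriesID(ids0[0]); err != nil {
		t.Fatal(err)
	} else if err := sfile.Reopen(); err != nil { // hypothetical helper
		t.Fatal(err)
	} else if !sfile.IsDeleted(ids0[0]) {
		t.Fatal("expected deletion after reopen")
	}
}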

	if _, ok := idx.tombstones[id]; ok {
		return true
	}
	return idx.FindOffsetByID(id) == 0
}
benbjohnson (Contributor, Author)

Tombstones only exist for IDs deleted since the last compaction, so we now check whether the ID exists in the map as well.
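
Put together, the existence check presumably looks something like the sketch below; the receiver and signature are inferred, and only the body matches the snippet above:

// Sketch: an ID counts as deleted if it has an in-memory tombstone (a deletion
// since the last compaction) or if the on-disk index has no offset for it.
func (idx *SeriesIndex) IsDeleted(id uint64) bool {
	if _, ok := idx.tombstones[id]; ok {
		return true
	}
	return idx.FindOffsetByID(id) == 0
}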

	idOffsetMap := make(map[uint64]int64)
	for k, v := range idx.idOffsetMap {
		idOffsetMap[k] = v
	}
benbjohnson (Contributor, Author)

Since we now check for existence in the map and tombstones, we need to copy the id/offset map during Clone() (which is used to snapshot the current index state).
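
A rough sketch of what Clone ends up doing under that requirement; the tombstone-set copy and the shallow copy of the remaining fields are assumptions beyond the snippet above:

// Sketch: snapshot the index so later mutations don't leak into the clone.
func (idx *SeriesIndex) Clone() *SeriesIndex {
	// Copy the tombstone set so deletions made before the snapshot stay visible.
	tombstones := make(map[uint64]struct{}, len(idx.tombstones))
	for id := range idx.tombstones {
		tombstones[id] = struct{}{}
	}

	// Copy the id/offset map so existence checks against the snapshot work.
	idOffsetMap := make(map[uint64]int64, len(idx.idOffsetMap))
	for k, v := range idx.idOffsetMap {
		idOffsetMap[k] = v
	}

	other := *idx // shallow-copy the remaining fields (assumption)
	other.tombstones = tombstones
	other.idOffsetMap = idOffsetMap
	return &other
}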

	if err := segment.Flush(); err != nil {
		return err
	}
}
benbjohnson (Contributor, Author)

This was an unrelated bug, but it caused series deletions to not be visible in compactions until another series was added afterward, which is what caused the segment to be flushed. Now all deletions cause a segment flush.
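
A rough sketch of the deletion path with this change; apart from segment.Flush, every name here (SeriesPartition, activeSegment, writeTombstoneEntry, index.Delete) is an assumption:

// Sketch: write the tombstone entry and flush the segment immediately, so the
// deletion is visible to a later compaction even if no further series are written.
func (p *SeriesPartition) deleteSeriesID(id uint64) error {
	segment := p.activeSegment() // hypothetical accessor for the active log segment

	// Append a tombstone log entry for this series ID (hypothetical helper).
	if err := writeTombstoneEntry(segment, id); err != nil {
		return err
	}

	// Previously the segment was only flushed by the next insert; now every
	// deletion flushes it.
	if err := segment.Flush(); err != nil {
		return err
	}

	// Track the tombstone in the in-memory index as well (hypothetical method).
	p.index.Delete(id)
	return nil
}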

@@ -572,14 +580,14 @@ func (c *SeriesPartitionCompactor) compactIndexTo(index *SeriesIndex, seriesN ui
		return fmt.Errorf("unexpected series partition log entry flag: %d", flag)
	}

	// Save max series identifier processed.
	hdr.MaxSeriesID, hdr.MaxOffset = id, offset

benbjohnson (Contributor, Author)

This is another unrelated bug: deleted series did not advance the max ID/offset during compaction. The max ID/offset is now updated before the deletion check.
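
In other words, the per-entry handling inside compactIndexTo now orders the steps roughly like this; the IsDeleted call and surrounding control flow are inferred, and only the hdr assignment matches the hunk above:

	// Save max series identifier processed, even for deleted series, so the
	// header's max ID/offset still advances past them.
	hdr.MaxSeriesID, hdr.MaxOffset = id, offset

	// Only after that, skip tombstoned series so they are not copied into the
	// compacted index (inferred step; not shown in the hunk).
	if index.IsDeleted(id) {
		return nil
	}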

@e-dard (Contributor) left a comment

LGTM 👍 Can you also open this PR against the Platform project (no need for the tooling changes)?

@benbjohnson merged commit 1580f90 into master on Oct 8, 2018
@ghost removed the review label on Oct 8, 2018
@benbjohnson deleted the bj-fix-series-index-tombstone branch on October 8, 2018 14:35
@kevinhogan111 modified the milestones: 1.7.0, 1.6.4 on Oct 11, 2018