TSI Fixes (1.3) #8485

Merged: benbjohnson merged 4 commits into influxdata:1.3 from tsi-1.3 on Jun 13, 2017

Conversation

benbjohnson (Contributor) commented:

Cherry picking for 1.3:

Required for all non-trivial PRs:
  • Rebased/mergeable
  • Tests pass
  • CHANGELOG.md updated

@@ -78,8 +78,9 @@ func (fs *FileSet) MustReplace(oldFiles []File, newFile File) *FileSet {

 	// Ensure all old files are contiguous.
 	for j := range oldFiles {
+		println("dbg/replace", len(fs.files), "//", i, j)
Contributor commented:
Remove?

benbjohnson (Contributor, Author) replied:
Fixed in 400147c.

 		if fs.files[i+j] != oldFiles[j] {
-			panic("cannot replace non-contiguous files")
+			panic(fmt.Sprintf("cannot replace non-contiguous files: subset=%+v, fileset=%+v", Files(oldFiles).IDs(), Files(fs.files).IDs()))
Contributor commented:
Is MustReplace only used in tests?

benbjohnson (Contributor, Author) replied:
No, this is called in a goroutine started by compact(). It's an invariant of the code and should not fail. It's an extra check to ensure the index does not get corrupted.
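
For context, the invariant can be sketched roughly like this (simplified, hypothetical types; not the actual tsi1 implementation, which operates on the package's File and FileSet types):

package main

import "fmt"

// file is a stand-in for the index/log files tracked by a FileSet.
type file struct{ id int }

// mustBeContiguous panics unless old appears as an unbroken run inside all
// starting at offset i, mirroring the check shown in the diff above.
func mustBeContiguous(all, old []file, i int) {
	for j := range old {
		if all[i+j] != old[j] {
			panic(fmt.Sprintf("cannot replace non-contiguous files: subset=%+v, fileset=%+v", old, all))
		}
	}
}

func main() {
	all := []file{{1}, {2}, {3}, {4}}
	mustBeContiguous(all, []file{{2}, {3}}, 1) // files 2 and 3 sit at offsets 1 and 2: ok
	fmt.Println("contiguous replacement ok")
	// mustBeContiguous(all, []file{{2}, {4}}, 1) // would panic: file 4 does not follow file 2
}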

jwilder added this to the 1.3.0 milestone on Jun 13, 2017
This fixes the case where log files are compacted out of order
and cause non-contiguous sets of index files to be compacted.

Previously, the compaction planner would fetch a list of index files
for each level and compact them in order, starting with the oldest.
This can be a problem for level 1 because level 0 files (log files)
are compacted individually, and in some cases a log file can finish
compacting before older log files have finished. That leaves a gap in
the list of level 1 files, which was then ignored when fetching the
list of index files.

Now, the planner reads the list of index files starting from the
oldest but stops once it hits a log file. This prevents that gap
from being ignored.
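
The new planner behavior can be sketched roughly like this (an illustrative model only; the plannedFile type and planCandidates function are hypothetical and not the actual tsi1 planner API):

package main

import "fmt"

// plannedFile is a stand-in for an entry in the shard's file list.
type plannedFile struct {
	ID    int
	IsLog bool // level 0 (log file) vs. level >= 1 (index file)
}

// planCandidates walks the list from oldest to newest and returns the run of
// index files that precedes the first log file, so a gap left by an
// out-of-order log compaction is never skipped over.
func planCandidates(files []plannedFile) []plannedFile {
	var out []plannedFile
	for _, f := range files {
		if f.IsLog {
			break // stop at the first log file instead of jumping the gap
		}
		out = append(out, f)
	}
	return out
}

func main() {
	files := []plannedFile{{1, false}, {2, false}, {3, true}, {4, false}}
	fmt.Println(planCandidates(files)) // only files 1 and 2 are planned; file 4 waits
}

Under the old behavior the walk would have continued past file 3 and pulled in file 4, producing the kind of non-contiguous set that trips the MustReplace check above.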
benbjohnson merged commit e9ca036 into influxdata:1.3 on Jun 13, 2017
benbjohnson deleted the tsi-1.3 branch on June 13, 2017 22:11