Files that get updated a lot, particularly if they're large, are poor candidates for compaction.
An obvious, simple heuristic that seems likely to be effective is to check file modification time against a configurable cutoff: if the file has changed in the past n days, skip it. Tiering the cutoff by file size might also make sense - frequently decompressing and recompressing a small file is much less costly than doing the same to a large one.
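A minimal sketch of what such a check might look like in Rust (assuming that matches Compactor's implementation language); the `SkipPolicy` struct, its field names, and the default values are all illustrative assumptions, not anything Compactor actually exposes:

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::{Duration, SystemTime};

/// Hypothetical tunables for the skip heuristic; names and defaults
/// are illustrative, not Compactor's actual configuration.
struct SkipPolicy {
    /// Skip any file modified within this many days.
    cutoff_days: u64,
    /// Files at or below this size get a shorter cutoff, since
    /// recompressing them is comparatively cheap.
    small_file_bytes: u64,
    /// Cutoff applied to small files instead of `cutoff_days`.
    small_cutoff_days: u64,
}

impl SkipPolicy {
    /// Returns true if `path` was modified recently enough that
    /// compacting it is likely to be wasted work.
    fn should_skip(&self, path: &Path) -> io::Result<bool> {
        let meta = fs::metadata(path)?;
        let days = if meta.len() <= self.small_file_bytes {
            self.small_cutoff_days
        } else {
            self.cutoff_days
        };
        let cutoff = Duration::from_secs(days * 24 * 60 * 60);
        let age = SystemTime::now()
            .duration_since(meta.modified()?)
            // A modification time in the future counts as "just changed".
            .unwrap_or(Duration::ZERO);
        Ok(age < cutoff)
    }
}

fn main() -> io::Result<()> {
    let policy = SkipPolicy {
        cutoff_days: 30,
        small_file_bytes: 1024 * 1024, // 1 MiB
        small_cutoff_days: 7,
    };
    for arg in std::env::args().skip(1) {
        let path = Path::new(&arg).to_owned();
        let verdict = if policy.should_skip(&path)? { "skip" } else { "compact" };
        println!("{}: {}", verdict, path.display());
    }
    Ok(())
}
```

Keeping the size threshold and both cutoffs configurable means the trade-off (compression savings vs. churn from recompressing hot files) stays in the user's hands rather than being hard-coded.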
This sort of thing will be more important should Compactor migrate to a background system service with patrol compactions.