[1.0.0-beta1] Full compaction never stops #6885
We are running 1.0.0-beta1 with just over 1 million series populated from collectd. The server has 16 cores and 32 GB of RAM, and we are writing fewer than 5,000 points per second.

When full compaction starts after 24 hours, it never stops: the logs show full compaction running continuously, and files appear to be recompacted without any change.

Here's an extract from the logs. Each compaction run consistently takes about 80 seconds.

Here are the files being updated on disk:

000001067-000054314 and 000001067-000054315 have been compacted into 000001067-000054316 and 000001067-000054317, but the newly compacted files are identical to the old ones:

Closer look at the files:
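A minimal sketch (not part of the original report) of how the "identical files" observation above can be checked: compare SHA-256 digests of an old and a newly compacted TSM file. The `.tsm` file names are assumptions based on the listing above.

```go
// Sketch: verify that a post-compaction TSM file is byte-identical to its
// pre-compaction predecessor by comparing SHA-256 digests.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// hashFile returns the hex-encoded SHA-256 digest of the file at path.
func hashFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	oldSum, err := hashFile("000001067-000054314.tsm") // pre-compaction file (assumed name)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	newSum, err := hashFile("000001067-000054316.tsm") // post-compaction file (assumed name)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if oldSum == newSum {
		fmt.Println("identical: compaction produced no change")
	} else {
		fmt.Println("files differ")
	}
}
```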
Comments

Seeing the same behaviour here. Log output is similar to yours: constant full compactions taking almost 2 minutes each, per the attachment below.

The changes in #6952 may have fixed this. Would you be able to test that build and see if the issue is still occurring?

As our series cardinality was over 1 million, we split our database in two and now run two instances of influxd. After this change, we are no longer hitting the bug. I still have a copy of the files above that were being continuously compacted, in case they are of any use.
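For illustration only, here is a rough sketch (not from this thread) of the kind of split described above: routing writes across two influxd instances by hashing the series key, so each instance holds roughly half the cardinality. The client package usage reflects the Go client `github.com/influxdata/influxdb/client/v2`; the addresses, database name, measurement, and tags are invented, and the posters actually fed their data from collectd rather than a custom writer.

```go
// Sketch: spread series across two influxd instances by hashing the
// series key, so each series always lands on the same instance.
package main

import (
	"hash/fnv"
	"log"
	"time"

	client "github.com/influxdata/influxdb/client/v2"
)

// pickInstance hashes the series key and selects one of the clients,
// keeping each series pinned to a single instance.
func pickInstance(seriesKey string, clients []client.Client) client.Client {
	h := fnv.New32a()
	h.Write([]byte(seriesKey))
	return clients[int(h.Sum32())%len(clients)]
}

func main() {
	addrs := []string{"http://influx-a:8086", "http://influx-b:8086"} // invented addresses
	var clients []client.Client
	for _, a := range addrs {
		c, err := client.NewHTTPClient(client.HTTPConfig{Addr: a})
		if err != nil {
			log.Fatal(err)
		}
		clients = append(clients, c)
	}

	// Build one example point (measurement, tags, and fields are invented).
	tags := map[string]string{"host": "web01"}
	fields := map[string]interface{}{"value": 0.42}
	pt, err := client.NewPoint("cpu_load", tags, fields, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	bp, err := client.NewBatchPoints(client.BatchPointsConfig{Database: "metrics"})
	if err != nil {
		log.Fatal(err)
	}
	bp.AddPoint(pt)

	// Route by measurement plus tag set, the series key in TSM terms.
	if err := pickInstance("cpu_load,host=web01", clients).Write(bp); err != nil {
		log.Fatal(err)
	}
}
```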
This should be fixed via #6952.

Unfortunately, that fix doesn't appear to have worked. We are again seeing non-stop full compaction, as described above, on 1.0.0-rc2.

The fix is not in rc2. It will be in the 1.0 final and in nightly master tomorrow. How are you testing it?

Not testing as such; we just noticed the problem again while running 1.0.0-rc2 with production data.