DELETE query is very slow #9015
Sorry, I did not mean to close the issue.
I am not sure why deleting is so slow, but shouldn't it be possible to handle it by simply recording that the values in a given period are deleted? Queries would then just skip those values, and when compaction runs, the values are finally removed. More than 10 seconds to delete four hours of data is a lot.
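The approach suggested here can be sketched as follows. This is a hypothetical illustration, not InfluxDB's actual implementation; the names `Series`, `TimeRange`, `Delete`, `Query`, and `Compact` are invented for the example. The delete itself only appends a tombstone range, reads skip tombstoned points, and a later compaction removes them physically:

```go
package main

import "fmt"

// TimeRange marks an interval of timestamps as deleted.
type TimeRange struct{ Min, Max int64 }

// Series is a toy time series: raw points plus lazily applied tombstones.
type Series struct {
	Points     map[int64]float64 // timestamp -> value
	Tombstones []TimeRange       // deleted ranges, not yet purged
}

// Delete is cheap: it only records the range, touching no data.
func (s *Series) Delete(min, max int64) {
	s.Tombstones = append(s.Tombstones, TimeRange{min, max})
}

func (s *Series) isDeleted(ts int64) bool {
	for _, t := range s.Tombstones {
		if ts >= t.Min && ts <= t.Max {
			return true
		}
	}
	return false
}

// Query skips tombstoned points at read time.
func (s *Series) Query(min, max int64) []float64 {
	var out []float64
	for ts := min; ts <= max; ts++ {
		if v, ok := s.Points[ts]; ok && !s.isDeleted(ts) {
			out = append(out, v)
		}
	}
	return out
}

// Compact physically removes tombstoned points and clears the tombstones.
func (s *Series) Compact() {
	for ts := range s.Points {
		if s.isDeleted(ts) {
			delete(s.Points, ts)
		}
	}
	s.Tombstones = nil
}

func main() {
	s := &Series{Points: map[int64]float64{1: 1.5, 2: 2.5, 3: 3.5}}
	s.Delete(2, 3)
	fmt.Println(len(s.Query(1, 3))) // 1: only ts=1 survives the tombstone
	s.Compact()
	fmt.Println(len(s.Points)) // 1: tombstoned points physically removed
}
```

As the maintainer reply further down confirms, this is essentially how TSM deletes work: tombstones are appended and later resolved at compaction time.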
I see that the following code is blocking my deletes (block.txt). Could the blocking be prevented somehow?
@e-dard -- is it possible to get this into the milestone for 1.4 or 1.4.1?
@hpbieker we have made some significant changes recently to how deletes are handled, but it's too late in the day to push that into 1.4. However, if the issue you're experiencing is a regression of some kind, then it could be fixed and put into a patch release on either the 1.3 or 1.4 line. @jwilder might have more insight into the issue you're experiencing.
Hi again @e-dard and @jwilder. After digging a bit in the source code, it looks like the problem (at least one part of it) is that Engine.disableLevelCompactions() has to run before a delete is allowed. This waits for all (!?!) compaction jobs on the engine to finish and prevents new ones from starting. If a full compaction of a shard is running while we try to delete a few samples in a different shard, the delete job will just wait for the compaction job to finish (such a job may take 15 minutes here) before the samples can be removed. Wouldn't it be better if:
It might be that I missed a few points here?
@hpbieker Deletes were significantly reworked in #9084 which also improved performance. This will be a part of 1.5.
Shards are independent. There is an engine per shard, so deleting samples in one shard is not affected by compaction in others. The system's resources (CPU/disk/memory) are shared, however, so there is some throttling, and limits prevent many concurrent deletes across shards from adversely affecting the system.
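The throttling described here can be sketched with a counting semaphore. The limit of 2, the `runDeletes` helper, and the simulated workload are all invented for illustration, not the engine's real limits or code:

```go
package main

import (
	"fmt"
	"sync"
)

// runDeletes launches one delete goroutine per shard but lets at most
// maxConcurrent run at once, via a buffered channel used as a semaphore.
// It returns the peak number of deletes that were in flight together.
func runDeletes(nShards, maxConcurrent int) int {
	sem := make(chan struct{}, maxConcurrent)
	var mu sync.Mutex
	inFlight, peak := 0, 0

	var wg sync.WaitGroup
	for i := 0; i < nShards; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{} // acquire a delete slot
			mu.Lock()
			inFlight++
			if inFlight > peak {
				peak = inFlight
			}
			mu.Unlock()
			// ... per-shard delete work would happen here ...
			mu.Lock()
			inFlight--
			mu.Unlock()
			<-sem // release the slot
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	// 8 shards deleting, capped at 2 concurrent deletes.
	fmt.Println(runDeletes(8, 2) <= 2) // true: the cap is never exceeded
}
```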
That is essentially what we do. TSM files are immutable by design, so we append a tombstone record to a
Bug report
System info:
Influx 1.3.2
Steps to reproduce:
Run `DELETE FROM measurement WHERE ...`
Expected behavior:
The query above should be fast.
Actual behavior:
It is slow, and I actually get a few of these in my log: