Poor zvol discard performance. #6728
Comments
@ab-oe ouch, that's a pretty dramatic performance hit. I assume that test was run on an otherwise idle system. Have you tried setting `zfs_per_txg_dirty_frees_percent=0` to disable the throttle entirely? If that works we could consider doing something along the lines of disabling the throttle when there's not a significant amount of contending dirty data. The purpose of the throttle was to prevent frees from starving out writes. Let's get @alek-p's thoughts on this since he implemented the original patch and may have a better idea.
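For anyone who wants to test this, on ZFS on Linux the tunable can be changed at runtime through the module parameter; a value of `0` disables the throttle:

```sh
echo 0 > /sys/module/zfs/parameters/zfs_per_txg_dirty_frees_percent
```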
@behlendorf I must admit that I didn't try setting `zfs_per_txg_dirty_frees_percent=0`.
The issue with this approach could be that once we disable the throttle we may not let any more non-freeing dirty data into the TXG until the frees are done, which would starve out writes again. To avoid this we could periodically re-enable the throttle and look at how much non-freeing dirty data gets in then, but I would think this could lead to choppy performance. Instead of disabling the throttle, perhaps setting `zfs_per_txg_dirty_frees_percent` to a higher value makes sense. Also, it now seems to me that we should calculate …
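For context, if I'm remembering the current code correctly, the threshold is derived from `zfs_dirty_data_max` in `dmu_free_long_range_impl()` roughly like this (paraphrased sketch, not verbatim):

```c
/*
 * Paraphrased from dmu_free_long_range_impl() (module/zfs/dmu.c): the
 * per-TXG budget for freeing dirty data is a percentage of
 * zfs_dirty_data_max; values above 100 fall back to a fixed quarter of
 * zfs_dirty_data_max, and 0 disables the throttle entirely.
 */
if (zfs_per_txg_dirty_frees_percent <= 100)
	dirty_frees_threshold =
	    zfs_per_txg_dirty_frees_percent * zfs_dirty_data_max / 100;
else
	dirty_frees_threshold = zfs_dirty_data_max / 4;
```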
After looking at this a little bit more I'm no longer convinced that any specific limit is the real root cause for this. Specifically, for a discard/free-only workload like `mkfs.ext4`'s full-device discard, the frees aren't accounted as normal dirty data, so nothing ever forces the open TXG to roll over and we end up waiting on the TXG timeout. The throttle here in `dmu_free_long_range_impl()` then blocks any further frees until the next TXG opens, so the whole discard proceeds in small, timeout-paced chunks. I suspect this is the reason why setting `zfs_per_txg_dirty_frees_percent=0` helps: it disables the throttle entirely. [edit] Using …
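For reference, a simplified sketch of that throttle, paraphrased from memory of `dmu_free_long_range_impl()` in `module/zfs/dmu.c` (not verbatim; error handling and chunking details trimmed):

```c
/* Paraphrased sketch of the free throttle in dmu_free_long_range_impl(). */
while (length != 0) {
	uint64_t long_free_dirty_all_txgs = 0;
	int t;

	/* Sum the freeing dirty data charged to every in-flight TXG. */
	mutex_enter(&dp->dp_lock);
	for (t = 0; t < TXG_SIZE; t++)
		long_free_dirty_all_txgs += dp->dp_long_free_dirty_pertxg[t];
	mutex_exit(&dp->dp_lock);

	/*
	 * Once over the threshold, wait for the next TXG to open.  With a
	 * frees-only workload nothing else forces the TXG to roll over, so
	 * this effectively sleeps until zfs_txg_timeout (5s by default)
	 * fires, serializing the discard into slow, timeout-paced chunks.
	 */
	if (dirty_frees_threshold != 0 &&
	    long_free_dirty_all_txgs >= dirty_frees_threshold) {
		txg_wait_open(dp, 0);
		continue;
	}

	/* ...otherwise dirty the next chunk of the range and loop... */
}
```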
That makes a lot of sense, so what we need is a way to trigger a TXG rollover for the frees-only workload without short-circuiting the TXG mechanics when the load is "normal".
Not sure how viable this is, but perhaps adding frees/discards as a new class in the I/O scheduler makes sense; it could be made to satisfy the conditions referenced above (sketched below).
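A purely hypothetical sketch of what that could look like — `ZIO_PRIORITY_FREE` and the two tunables below do not exist; they're invented for illustration, modeled on the existing queue classes in `include/sys/zio_priority.h` and the per-class limits in `module/zfs/vdev_queue.c`:

```c
/* Hypothetical new scheduling class for frees/discards (sketch only). */
typedef enum zio_priority {
	ZIO_PRIORITY_SYNC_READ,
	ZIO_PRIORITY_SYNC_WRITE,	/* ZIL */
	ZIO_PRIORITY_ASYNC_READ,	/* prefetch */
	ZIO_PRIORITY_ASYNC_WRITE,	/* spa_sync() */
	ZIO_PRIORITY_SCRUB,		/* asynchronous scrub/resilver reads */
	ZIO_PRIORITY_FREE,		/* hypothetical: frees/discards */
	ZIO_PRIORITY_NUM_QUEUEABLE,
	ZIO_PRIORITY_NOW		/* non-queued I/Os */
} zio_priority_t;

/* Per-class concurrency limits, mirroring the other class tunables. */
uint32_t zfs_vdev_free_min_active = 1;	/* hypothetical tunable */
uint32_t zfs_vdev_free_max_active = 3;	/* hypothetical tunable */
```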
We are currently experiencing this problem at Datto and are working around it by setting `zfs_per_txg_dirty_frees_percent=0`.
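For a workaround that persists across reboots, the parameter can be set via the standard module-option mechanism (the exact conf file name is the admin's choice):

```sh
# /etc/modprobe.d/zfs.conf
options zfs zfs_per_txg_dirty_frees_percent=0
```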
@tcaputi reviewed; the patch looks good. I've just requested that a comment be added.
Describe the problem you're observing
Since commit 539d33c, zvol discard performance has dropped dramatically, no matter how the `zfs_per_txg_dirty_frees_percent` parameter is set.

Describe how to reproduce the problem
The easiest way is to run `mkfs.ext4`, which discards the device by default, on the zvol.
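For example (pool and volume names are illustrative; `-s` creates a sparse, i.e. thin-provisioned, volume):

```sh
zfs create -s -V 1T tank/testvol
time mkfs.ext4 /dev/zvol/tank/testvol
```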
On 0.7.2 with 539d33c reverted, the whole `mkfs.ext4` run on a 1 TiB thin-provisioned zvol takes 6 seconds.
On the official release the same operation takes 2 minutes and 19 seconds.
For a 10 TiB volume it took almost 20 minutes.