ZFS resilver can be very slow if there are other heavy disk IO requests, can the resilver priority be adjusted? #11777
Comments
Have a look at EDIT: urlfix
The latest change in this area was mine: #11166. The idea was actually the opposite -- to throttle resilver so that it does not affect payload latency -- but there are a number of tunables to allow adjustment, if needed.
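If #11166 refers to the non-interactive I/O throttling change, the relevant knobs can usually be inspected and adjusted through the ZFS module parameters in sysfs. The parameter names below (`zfs_vdev_nia_delay`, `zfs_vdev_nia_credit`) are my best guess at the tunables that change introduced and may differ by ZFS version; check `man zfs-module-parameters` on your system. A sketch:

```shell
# Inspect the current values of the non-interactive I/O tunables
# (parameter names assumed; verify against your ZFS version's man page).
for p in zfs_vdev_nia_delay zfs_vdev_nia_credit; do
    f=/sys/module/zfs/parameters/$p
    [ -r "$f" ] && printf '%s = %s\n' "$p" "$(cat "$f")"
done

# Lower the delay so resilver I/O is throttled less aggressively.
# Takes effect immediately; revert by writing the old value back.
echo 2 | sudo tee /sys/module/zfs/parameters/zfs_vdev_nia_delay
```

Changes made this way do not survive a reboot; persistent values belong in a modprobe options file.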
Thank you very much, justinianpopa ~ The document says For my understanding: in my case, if there are heavy read I/O requests, they still affect ZFS resilver speed. E.g. while a disk is being resilvered, if an rsync is copying files out of the raidz, the resilver becomes very slow; if the rsync is killed, the resilver speeds up. Not sure if my understanding is correct. And I have tested the
About #11166: thank you very much, amotin. I am not sure whether the problem I am seeing is caused by 4K random reads (somehow related?). I am glad to continue investigating and to help.
While I'm unsure whether scrub/resilver I/O is considered async or sync by the ZFS scheduler, you may also try tuning (as mentioned in the issue above)

There may not be a tunable combination for exactly prioritising resilver reads, but you might be able to tune the scheduling of I/O requests to at least treat normal pool activity and resilvers in a balanced manner for your specific drives. For tuning, there also exists

In my experience with ZFS-on-Linux performance tuning, you could also try setting the read_ahead_kb kernel tunable to 0 (per block device in the pool, in /sys/block/*/queue/read_ahead_kb) to gain a few more useful IOPS, since data from read-ahead is rarely useful. It's worth trying a few combinations.
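The read_ahead_kb suggestion above can be applied per device roughly as follows. This is a sketch: the device names `sda sdb sdc` are placeholders for your actual pool members, and it assumes whole-disk members that appear under /sys/block.

```shell
# Disable kernel read-ahead on each disk in the pool.
# Replace sda sdb sdc with the actual pool member devices.
for dev in sda sdb sdc; do
    echo 0 | sudo tee /sys/block/$dev/queue/read_ahead_kb
done

# Verify the new settings.
grep . /sys/block/sd?/queue/read_ahead_kb
```

The setting is not persistent across reboots; a udev rule is the usual way to make it stick.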
The man page for zfs-module-parameters explicitly cites zfs_vdev_async_[...] as affecting resilver performance. It also explicitly suggests that tuning zfs_vdev_scrub_max_active "will cause the scrub or resilver to complete more quickly", so it should affect resilvers too.
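A minimal sketch of adjusting zfs_vdev_scrub_max_active at runtime. The value 8 and the modprobe.d filename are illustrative choices, not recommendations; the right value depends on your drives and workload.

```shell
# Show the current value (the historical default is low, e.g. 2 or 3).
cat /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# Raise the number of concurrent scrub/resilver I/Os per vdev.
echo 8 | sudo tee /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# Optionally persist across reboots (path/filename may vary by distro).
echo 'options zfs zfs_vdev_scrub_max_active=8' | \
    sudo tee /etc/modprobe.d/zfs-resilver.conf
```

Raising this trades payload latency for resilver throughput, so it is worth watching `zpool iostat` and application latency while experimenting.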
Thanks, rincebrain. I will read the documentation about zfs_vdev_scrub_max_active and try it later. It would help many users if the ZFS documentation had a topic on resilver performance.
Describe the feature you would like to see added to OpenZFS
Can the resilver I/O priority be adjusted? It would give users a chance to decide how to allocate I/O resources.
In old ZFS there were module parameters like zfs_resilver_delay; however, these parameters have been removed in the latest versions.
How will this feature improve OpenZFS?
ZFS resilver can be very slow if there are other heavy disk I/O requests; it may starve and nearly stop, so the resilver does not complete even after several days.