If I had to guess, I would say that read throughput was low because of the various seeks the disks had to do for metadata (which is already cached when doing writes), and because the iostat figure is the combined sum of the I/O done on all 3 disks. In that case, this is not a bug.
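One quick way to separate those two effects is to compare pool-level throughput with per-disk figures while the read workload is running. A minimal sketch, assuming the pool is named `datapool` (adjust the pool and device names to match your system):

```sh
# Pool-wide and per-vdev read/write bandwidth, sampled every 5 seconds
zpool iostat -v datapool 5

# Per-device figures from sysstat; summing the sd* rows gives the combined number
iostat -x 5
```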
With that said, @behlendorf likely closed this because it is hard to see what is actionable here.
Exactly right. I closed this issue because it was over a year old, we've fixed a ton of stuff in the code, and there wasn't enough detail for us to do anything.
I have a specific use case that has unbelievably bad performance. Continuous reads from a single large media file, originally written to an otherwise quiescent array. The size of the file is roughly 8GB. The array, roughly 7TB.
On average, the total amount of data read from the array during a single playback comes to roughly 2.3TB. No hyperbole.
Any objection to me opening a new issue about this?
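If it helps make a new report concrete, one rough way to quantify the problem is to snapshot the disks' read counters before and after a single playback. A sketch, assuming the pool's member disks are `sdb`, `sdc`, and `sdd` (purely illustrative names):

```sh
# /proc/diskstats field 6 is sectors read; 1 sector = 512 bytes
before=$(awk '$3 ~ /^sd[bcd]$/ {s += $6} END {print s}' /proc/diskstats)
# ... play the 8GB file ...
after=$(awk '$3 ~ /^sd[bcd]$/ {s += $6} END {print s}' /proc/diskstats)
echo "$(( (after - before) * 512 / 1024 / 1024 )) MiB read during playback"
```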
Hi
I have kernel 3.0.4 with ZFS 0.6.0-rc5.
Write performance on a raidz1 with 3 x 2TB SATA drives is 240MB/s
( dd if=/dev/zero of=/datapool bs=1M count=10000 )
( compression and dedup are off )
Read performance is only 70MB/s?
During a zpool scrub I can see 300MB/s of reads (iostat).
Any ideas?
Thanks for the help!
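For reference, one common way to get a read number comparable to the dd write test is a sequential read of a file on the pool starting from a cold cache. A minimal sketch, assuming a pool named `datapool` and a test file written beforehand (both names are placeholders; the export/import is used because dropping the page cache does not reliably empty the ZFS ARC):

```sh
# Start with an empty ARC by re-importing the pool, then read sequentially
zpool export datapool && zpool import datapool
dd if=/datapool/testfile of=/dev/null bs=1M
```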