Slow file copy for big files. #687
Please post the output of "zdb". |
$ sudo zdb
tank:
    version: 28
    name: 'tank'
    state: 0
    txg: 4
    pool_guid: 14708313474365326385
    hostname: 's0'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 14708313474365326385
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 3208061938074886841
            nparity: 1
            metaslab_array: 31
            metaslab_shift: 36
            ashift: 12
            asize: 10001923440640
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 8180987024594158286
                path: '/dev/disk/zpool/d1-part1'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 13711974958319910791
                path: '/dev/disk/zpool/d2-part1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 13904306676428230775
                path: '/dev/disk/zpool/d3-part1'
                whole_disk: 1
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 6731424362151096911
                path: '/dev/disk/zpool/d4-part1'
                whole_disk: 1
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 9282870102680286776
                path: '/dev/disk/zpool/d5-part1'
                whole_disk: 1
                create_txg: 4
|
The new daily build of ZFS is even worse than the previous one (the previous build gave around 50~60 MB/s). Running "dd if=/dev/zero of=test count=100000" on the ZFS raidz pool, the result is roughly ten times slower.... |
By any chance, are you using a Command Based Switching port multiplier? |
All disks in this ZFS installation directly connected to this motherboard: http://www.supermicro.com/Aplus/motherboard/Opteron6000/SR56x0/H8QGi-F.cfm |
Which Linux distribution are you using? What is your kernel version? Is your system BIOS up to date? Is your distribution up to date? |
Hi Ryao, I checked the BIOS and it is up to date too. By the way, to unmount ZFS during reboot I wait for 30 minutes and then press the reset button.... |
Just an addition: when I first installed ZFS (in January) the speed was ~200 MB/s, peaking around 400 MB/s. But it had issues with removing large files and with stability. |
Try using dd with bs=16384 as a parameter. The throughput should improve. As for waiting 30 minutes, that is a separate issue. I imagine that you would want to discuss that with @dajhorn. |
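To illustrate why the bs parameter matters: the two dd runs below write exactly the same number of bytes, but the first issues 32x as many write() calls. The sketch uses a temp file so it runs anywhere; point FILE at a path on your ZFS dataset to measure the real difference.

```shell
# Both runs write the same 1 MiB; only the number of write() syscalls differs.
FILE=$(mktemp)
dd if=/dev/zero of="$FILE" bs=512   count=2048 2>/dev/null   # 2048 writes of 512 B
dd if=/dev/zero of="$FILE" bs=16384 count=64   2>/dev/null   # 64 writes of 16 KiB
SIZE=$(stat -c %s "$FILE")   # 1048576 bytes either way
echo "$SIZE"
rm -f "$FILE"
```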
Hi Ryao, The question is how to get that kind of performance from a standard Linux command like "cp", which only achieves around 11 MB/s. |
You are using ashift=12, so all write operations are 4KB in size. With 5 disks in raidz, you have 4 data disks, so the minimal stripe size is 16KB. If your files are less than 16KB in size, the overhead of writes will still be approximately the cost of 16KB, which is what is hurting performance. You can improve performance by adding a SSD as a SLOG device to your vdev. Then write performance on both sequential and random operations should be close to the sequential write performance of the SSD. |
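The arithmetic in that explanation can be checked directly; this is pure shell arithmetic using the numbers from this pool, nothing ZFS-specific:

```shell
# Minimum full-stripe write for this pool, per the explanation above.
ASHIFT=12
NDISKS=5        # disks in the raidz vdev
PARITY=1        # raidz1
SECTOR=$((1 << ASHIFT))              # 4096 bytes per low-level write
DATA_DISKS=$((NDISKS - PARITY))      # 4 data disks
MIN_STRIPE=$((SECTOR * DATA_DISKS))  # 16384 bytes = smallest full stripe
echo "$MIN_STRIPE"                   # prints 16384
```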
I got 11-50 MB/s when copying a 16 GB file. Is it possible to improve this without an SSD drive? |
I tested copying a 1GB file from a tmpfs to a pool containing a single 6-disk raidz2 vdev composed of Samsung HD204UI disks and the transfer time was 1.512 seconds. The transfer rate was 677.2 MB/sec. It might be that you are encountering seek overhead. Are the reads and writes for that occurring inside a single pool? |
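For the record, the quoted rate follows from the numbers given (1 GiB transferred in 1.512 seconds):

```shell
# 1024 MiB / 1.512 s; awk handles the floating-point division.
RATE=$(awk 'BEGIN { printf "%.1f", 1024 / 1.512 }')
echo "$RATE MB/sec"   # prints 677.2 MB/sec
```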
Reading from ZFS and storing on a regular drive gives 102 MB/s. |
I'm having this same problem. Copying large files within the same pool is very slow, but copying between the pool and an external source is as fast as expected. What is the solution to this? Would a LOG device actually make a difference (I don't want to waste my money)? Is there anything else that can be done? |
@rbabchis How slow? When copying files within a pool the drives are going to be contending for read and writes. This is going to impact performance to some degree. |
It bounces around a lot, but averages to about 50MB/sec (using rsync locally). Sometimes it crawls as slow as 25MB/sec without any apparent reason. I've tried and failed to determine why. bonnie++ shows about 150MB/sec read, 100MB/sec write, 50MB/sec rewrite. |
I have also noticed this problem. I have 4 1 TB SSDs, split into two mirrors. I can read at about 1 GB/s and write at around 600 MB/s sustained. However, doing a cp, the speed is capped at 12 MB/s. Painfully slow. Working with a 40 GB file, it is 20x faster to copy the file to another partition on the same drives and then copy it back; doing it this way results in a copy speed of around 400 MB/s in both directions. This also rules out drive I/O as the bottleneck. It is 100% a problem of copying within the same zpool. Each drive is partitioned exactly the same.
And here is a copy of my zdb output. |
@georgyo are you intentionally using ashift=9 ? http://open-zfs.org/wiki/Performance_tuning#Alignment_Shift_.28ashift.29 |
@kernelOfTruth I did not specify an ashift when I created the zpools. The SSDs in question are Samsung 850 Pros, which as far as I can tell have a sector size of 512 bytes. I will bump the ashift up to 13 and report the results, but it will take some time to migrate all the data around. |
Was this successful? (I have the same issue.) |
I'm having the exact same issue as @rbabchis, copying a file from a dataset to another folder in the dataset or to another dataset is very slow, 20-50 MB/s with rsync. Using fio set to direct gives me sequential rw at around 80 MB/s. Sequential reads and sequential writes are both around 250 MB/s. I'm using zfs 0.8.0-rc3. |
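For anyone trying to reproduce that measurement, a fio job file along the following lines would exercise the same pattern (sequential mixed read/write through O_DIRECT). The filename, block size, and file size here are assumptions for illustration, not values taken from the thread:

```ini
; seq-rw.fio -- mixed sequential read/write with O_DIRECT,
; roughly matching the test described above.
; Run with: fio seq-rw.fio
; The filename below is an assumed path on the ZFS dataset.
[seqrw]
filename=/tank/fio-testfile
rw=rw
bs=1M
size=1G
direct=1
ioengine=libaio
```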
I have a raidz pool with 5 disks, and the system only provides ~53 MB/sec when copying files on it.