Running with the fsx.c code from lustre-master/lustre/tests, fsx fails on every run against both zfs master and the 0.6.4 release. The binary was compiled with gcc -o fsx fsx.c.
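For reference, fsx.c can be pulled straight from the Lustre tree and built standalone; the clone URL below is the public Whamcloud repository, and the checkout directory name is just how we happened to lay things out:
# fetch the Lustre tree and build fsx standalone
git clone git://git.whamcloud.com/fs/lustre-release.git lustre-master
cd lustre-master/lustre/tests
gcc -o fsx fsx.c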
Setup test system:
#!/bin/bash
echo "remove old image files"
rm -f virtualdisk*.img
echo "create files to use as disks"
for i in 1 2 3 4 5
do
# 512 MiB backing file per virtual disk (1048576 blocks of 512 bytes)
dd if=/dev/zero of=virtualdisk$i.img bs=512 count=1048576 &
done
wait
losetup -a
echo
echo "create loop back devices"
for i in 1 2 3 4 5
do
losetup /dev/loop$i virtualdisk$i.img
done
losetup -a
echo "0 1048576 linear /dev/loop1 0" | dmsetup create sane_dev1
echo "0 1048576 linear /dev/loop2 0" | dmsetup create sane_dev2
echo "0 1048576 linear /dev/loop3 0" | dmsetup create sane_dev3
echo "0 1048576 linear /dev/loop4 0" | dmsetup create sane_dev4
echo "0 1048576 linear /dev/loop5 0" | dmsetup create sane_dev5
ls -l /dev/mapper
Name="fsxtest"
Type="raidz"
Dev="/dev"
Devices="$Dev/dm-0 $Dev/dm-4 $Dev/dm-2 $Dev/dm-3"
Spare="$Dev/dm-1"
#
echo "zpool create -f $Name $Type $Devices spare $Spare"
zpool create -f $Name $Type $Devices spare $Spare
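Before kicking off fsx it is worth sanity-checking the pool; something like the following quick check (run by hand, not part of the script above):
# confirm the raidz pool and spare came up healthy
zpool status fsxtest
zpool list fsxtest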
Then 10 fsx jobs were started using these options:
# $Now is assumed to be a run timestamp set earlier, e.g.:
Now=$(date +%Y%m%d-%H%M%S)
for i in 0 1 2 3 4 5 6 7 8 9
do
# background each job so all 10 run concurrently
nohup ./fsx -d -S 1 -L /fsxtest/fsxnlite-testfile-$i-$Now > OUTPUT_fsxlite--$i-$Now 2>&1 &
done
wait
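Once the jobs complete, the captured logs can be scanned for miscompares; this assumes the Lustre fsx reports bad reads with its usual "READ BAD DATA" message, which may differ in other fsx versions:
# list any job logs that recorded a miscompare
grep -l "READ BAD DATA" OUTPUT_fsxlite*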
Neither -S nor -L is necessary to reproduce the failure; we added them only to reduce the complexity.
Under stock CentOS 6.7 and zfs 0.6.4, all runs failed.
Running on both 0.6.4 and master, it appears that if -W and -R are used (disabling memory-mapped writes and reads), fsx will run for hours. However, when memory-mapped operations are enabled it fails fairly quickly.
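Concretely, swapping the job line above for one with mapped I/O disabled is what runs for hours; everything else is unchanged:
nohup ./fsx -d -S 1 -L -W -R /fsxtest/fsxnlite-testfile-$i-$Now > OUTPUT_fsxlite--$i-$Now 2>&1 &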
All of the failures appear to have a similar signature: a range of data (on the order of 0xff bytes) that is expected to be zeroed actually contains data, which shows up clearly in the output of diff fsxtestfile.fsxgood.hexdump.txt fsxtestfile.hexdmp.txt.
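For reference, those hexdump files can be produced along these lines; fsx saves the expected file image to <testfile>.fsxgood when it detects a failure, and the hexdump -C formatting here is our own choice:
# dump expected vs. actual file contents and compare
hexdump -C fsxtestfile.fsxgood > fsxtestfile.fsxgood.hexdump.txt
hexdump -C fsxtestfile > fsxtestfile.hexdmp.txt
diff fsxtestfile.fsxgood.hexdump.txt fsxtestfile.hexdmp.txt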
We suspect there is some relation to #2976. When we compared against xfstests, its fsx appears to be newer than the fsx.c in lustre/tests, but they are similar.
This issue should be resolved, but it would be a good idea to add the test case proposed above into the ZFS Test Suite. There is a similar test case in xfstests, but as long as the test can be run relatively quickly the additional coverage would be useful.
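A minimal sketch of what such a case might look like, assuming the suite's standard shell helpers (log_must, log_pass, verify_runnable from libtest.shlib) and that an fsx binary is available to the tests; the file name and operation count are illustrative only:
#!/bin/ksh -p
. $STF_SUITE/include/libtest.shlib

verify_runnable "global"

# Hypothetical test file on the suite's default pool/dataset.
TESTFILE=/$TESTPOOL/$TESTFS/fsx-mmap-testfile

# Run fsx with memory-mapped reads and writes left enabled (the failing
# mode reported above); -N bounds the operation count so the test stays quick.
log_must fsx -d -S 1 -N 100000 $TESTFILE

log_pass "fsx with mmap I/O enabled completed without miscompares"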