Some test cases do not pass with real disks #6939

Closed
tcaputi opened this issue Dec 8, 2017 · 16 comments · Fixed by #7261
Labels
Component: Test Suite (indicates an issue with the test framework or a test case)

Comments

@tcaputi
Contributor

tcaputi commented Dec 8, 2017

The following tests seem to always fail when asking zfs-tests to use real disks instead of the default loop devices:

  • zdb_003_pos, zdb_004_pos, zdb_005_pos
  • zpool_create_001_pos, zpool_create_002_pos
  • inuse_005_pos, inuse_008_pos, inuse_009_pos

System information

Type                   Version/Name
Distribution Name      Ubuntu
Distribution Version   16.04
Linux Kernel           4.4.0-96-generic
Architecture           x86_64
ZFS Version            0.7.3
SPL Version            0.7.3

Describe the problem you're observing

The tests listed above seem to fail consistently on raw disks.

Describe how to reproduce the problem

Run zfs-tests.sh while specifying real disks via the $DISKS environment variable.
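
For example, something like this (the device names are placeholders for whatever spare disks the test machine has):

export DISKS="sdb sdc sdd"
/usr/share/zfs/zfs-tests.sh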

tcaputi added the Component: Test Suite label Dec 8, 2017
@tcaputi
Contributor Author

tcaputi commented Dec 8, 2017

Looked into these issues a little bit and here's what I have learned:

  • The zdb tests are failing because zpool create will make partitions on real disks but not on loop devices. The zdb -l commands fail to find the labels because they check the full disk instead of the partition.

  • The inuse tests fail because they are not properly setting the disk and slice variables. The string manipulation done when setting these variables does not expect $SLICE_PREFIX to be an empty string. For loop devices the prefix is 'p' (as in /dev/loop0p1), but for raw disks it is an empty string (as in /dev/sdb1); a sketch of the needed guard is below.
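
A rough sketch of the kind of guard the string handling needs (illustrative only, not the actual test-suite code; assume $slice holds a value like loop0p1 or sdb1):

# Recover the whole-disk name from a slice name, tolerating an empty
# $SLICE_PREFIX ('p' for loop devices, '' for plain sd disks).
if [[ -n $SLICE_PREFIX ]]; then
        disk=${slice%${SLICE_PREFIX}*}    # loop0p1 -> loop0
else
        disk=${slice%%[0-9]*}             # sdb1 -> sdb
fi
# Building the slice name works unchanged either way: ${disk}${SLICE_PREFIX}1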

@tcaputi
Contributor Author

tcaputi commented Dec 13, 2017

While looking at why the zpool_create tests were failing, I noticed it was because 2 of my VM's auxiliary disks are extremely tiny (2GB) and were not large enough to partition. We should add a size check to the test runner, similar to how the runner fails immediately if given fewer than 3 disks.
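
Something like this in the runner's pre-flight checks would catch it early (a sketch only; the 4G threshold is arbitrary, and bare device names, as in DISKS="sdb sdc sdd", are assumed):

# Refuse to run if any test disk is smaller than the minimum usable size.
MIN_SIZE=$((4 * 1024 * 1024 * 1024))
for disk in $DISKS; do
        size=$(blockdev --getsize64 /dev/$disk 2>/dev/null)
        if [[ -z $size || $size -lt $MIN_SIZE ]]; then
                echo "disk $disk is too small (need at least $MIN_SIZE bytes)" >&2
                exit 1
        fi
done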

Incidentally, I tried to resolve this problem by using zvols to create 3 volumes with a minimum size of 3G, hoping that would be large enough to allow the tests to run. This actually caused zdb_002_pos to deadlock:

Commands to reproduce:

zpool create pool sdb sdc sdd # sdb is 8GB, sdc is 2GB, sdd is 2GB 
zfs create -V 3G pool/vol1
zfs create -V 3G pool/vol2
zfs create -V 3G pool/vol3
export DISKS='/dev/zvol/pool/vol1 /dev/zvol/pool/vol2 /dev/zvol/pool/vol3'
su tom -c '/usr/share/zfs/zfs-tests.sh -kx -r /media/sf_projects/zfs_crypto/dev.run'

Stack trace:

[45426.651479] INFO: task zpool:32221 blocked for more than 120 seconds.
[45426.651483]       Tainted: P           OE  3.19.0-25-generic #26~14.04.1-Ubuntu
[45426.651484] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[45426.651486] zpool           D ffff8801ca693958     0 32221  30748 0x00000000
[45426.651489]  ffff8801ca693958 ffff8800d9d089d0 0000000000013e80 ffff8801ca693fd8
[45426.651493]  0000000000013e80 ffff8801bacbe220 ffff8800d9d089d0 000000000e600001
[45426.651495]  ffff880215a79d58 ffff880215a79d5c ffff8800d9d089d0 00000000ffffffff
[45426.651499] Call Trace:
[45426.651505]  [<ffffffff817b27f9>] schedule_preempt_disabled+0x29/0x70
[45426.651509]  [<ffffffff817b44e5>] __mutex_lock_slowpath+0x95/0x100
[45426.651511]  [<ffffffff817b4555>] ? mutex_lock+0x5/0x37
[45426.651514]  [<ffffffff817b4573>] mutex_lock+0x23/0x37
[45426.651517]  [<ffffffff81224813>] ? __blkdev_get+0x63/0x4b0
[45426.651519]  [<ffffffff81224813>] __blkdev_get+0x63/0x4b0
[45426.651522]  [<ffffffff814f09b5>] ? put_device+0x5/0x20
[45426.651525]  [<ffffffff81224e7f>] blkdev_get+0x21f/0x340
[45426.651531]  [<ffffffff8120c5f4>] ? mntput+0x24/0x40
[45426.651533]  [<ffffffff81224c65>] ? blkdev_get+0x5/0x340
[45426.651537]  [<ffffffff811f5e42>] ? path_put+0x22/0x30
[45426.651539]  [<ffffffff81224057>] ? lookup_bdev.part.12+0x47/0x90
[45426.651542]  [<ffffffff81225226>] blkdev_get_by_path+0x56/0x90
[45426.651544]  [<ffffffff812251d5>] ? blkdev_get_by_path+0x5/0x90
[45426.651579]  [<ffffffffc07e10f0>] vdev_disk_open+0x370/0x400 [zfs]
[45426.651582]  [<ffffffff811f5e42>] ? path_put+0x22/0x30
[45426.651585]  [<ffffffff81207f9b>] ? iput+0x3b/0x180
[45426.651612]  [<ffffffffc07dd515>] vdev_open+0x175/0x870 [zfs]
[45426.651638]  [<ffffffffc07dd3a5>] ? vdev_open+0x5/0x870 [zfs]
[45426.651664]  [<ffffffffc07ddc70>] vdev_open_children+0x60/0x190 [zfs]
[45426.651691]  [<ffffffffc07f1a3f>] vdev_root_open+0x6f/0x150 [zfs]
[45426.651716]  [<ffffffffc07dd515>] vdev_open+0x175/0x870 [zfs]
[45426.651741]  [<ffffffffc07dd3a5>] ? vdev_open+0x5/0x870 [zfs]
[45426.651766]  [<ffffffffc07dde02>] vdev_create+0x22/0xb0 [zfs]
[45426.651803]  [<ffffffffc07c27a2>] spa_create+0x592/0xd30 [zfs]
[45426.651809]  [<ffffffffc053c485>] ? spl_vmem_free+0x5/0x10 [spl]
[45426.651813]  [<ffffffffc053c48e>] ? spl_vmem_free+0xe/0x10 [spl]
[45426.651838]  [<ffffffffc07c2215>] ? spa_create+0x5/0xd30 [zfs]
[45426.651840]  [<ffffffff817b2685>] ? _cond_resched+0x5/0x40
[45426.651870]  [<ffffffffc081424c>] zfs_ioc_pool_create+0x28c/0x300 [zfs]
[45426.651899]  [<ffffffffc0813fc5>] ? zfs_ioc_pool_create+0x5/0x300 [zfs]
[45426.651927]  [<ffffffffc0815325>] zfsdev_ioctl+0x635/0x760 [zfs]
[45426.651936]  [<ffffffffc02e8077>] ? 0xffffffffc02e8077
[45426.651964]  [<ffffffffc0814cf5>] ? zfsdev_ioctl+0x5/0x760 [zfs]
[45426.651967]  [<ffffffff811ffa58>] do_vfs_ioctl+0x2f8/0x510
[45426.651969]  [<ffffffff811ff765>] ? do_vfs_ioctl+0x5/0x510
[45426.651971]  [<ffffffff811fc4b9>] ? putname+0x29/0x40
[45426.651975]  [<ffffffff81308c05>] ? cap_file_ioctl+0x5/0x10
[45426.651977]  [<ffffffff811ffcf1>] SyS_ioctl+0x81/0xa0
[45426.651981]  [<ffffffff817b822e>] ? device_not_available+0x1e/0x30
[45426.651984]  [<ffffffff817b668d>] system_call_fastpath+0x16/0x1b

@tcaputi
Contributor Author

tcaputi commented Feb 6, 2018

The zpool_create tests seem to be failing due to this issue: https://bbs.archlinux.org/viewtopic.php?id=202587

@bunder2015
Contributor

bunder2015 commented Feb 6, 2018

mount: /dev/sdb1: more filesystems detected. This should not happen, use -t <type> to explicitly specify the filesystem type or use wipefs(8) to clean up the device.

Coincidentally I had this message on one of my machines yesterday when I tried to mount my ESP. I think it might be caused by mount not knowing which FS implementation to use; mount -t vfat worked just fine. Edit: could fstab be missing entries?

@tcaputi
Contributor Author

tcaputi commented Feb 6, 2018

It seems this is caused by the standard Linux utilities not cleaning up all of the ZFS filesystem signatures, so mount finds both and gets confused. We could probably solve this by manually wiping the ZFS signatures ourselves.
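
One way to do that by hand is wipefs, which the mount error message itself points at (the device name is just an example, and the second command is destructive):

wipefs /dev/sdb1       # list every filesystem/RAID signature found on the device
wipefs -a /dev/sdb1    # erase all signatures so mount no longer sees two filesystems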

@behlendorf
Contributor

Wiping it ourselves seems like an entirely reasonable fix.

@PaulZ-98
Contributor

PaulZ-98 commented Feb 9, 2018

The zdb tests fail with disk devices (e.g. export DISKS="sdb sdc sdd") because dd and zdb can't use /dev/sdb; they need /dev/sdb1, as Tom mentioned above. I added the following to zdb_003_pos, zdb_004_pos and zdb_005_pos.

 set -A DISK $DISKS
 
+# set disk strings to use partition 1
+if is_linux && ! is_loop_device ${DISK[0]} && ! is_mpath_device ${DISK[0]}; then
+       for y in 0 1 ; do
+               DISK[$y]=${DISK[$y]}1
+       done
+fi
+
 default_mirror_setup_noexit $DISKS
 log_must dd if=/dev/${DISK[0]} of=/dev/${DISK[1]} bs=1K count=256 conv=notrunc

Loop devices work fine as is, so they are excluded from the appending code. I wasn't sure about mpath, so I excluded those too.

Anybody know of a cleaner or better way to take care of this?

@behlendorf
Contributor

because dd and zdb can't use /dev/sdb.

I don't follow. Why exactly can't dd and zdb use /dev/sdb? How exactly do they fail? It's true they need to be smart enough to determine whether zpool create partitioned a device and, if so, access the partition instead, but that seems like a straightforward thing to handle. And if that's the case, it's a great reason to wrap up #6277, which would allow the auto-partitioning to be disabled.

@PaulZ-98
Contributor

PaulZ-98 commented Feb 9, 2018

Right - it's not that they can't use /dev/sdb, it's that in these tests, dd and zdb -l don't find the labels using /dev/sdb once zpool create has auto-partitioned the disk. The commands do find the labels at /dev/sdb1.
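
To illustrate the failure mode (device names are only an example, not test-suite code):

zdb -l /dev/sdb     # whole disk: no ZFS labels found
zdb -l /dev/sdb1    # partition created by zpool create: labels found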

So maybe the if statement in the fix should determine whether the disk has been auto-partitioned, but as of now isn't that always the case for non-loop devices?

@behlendorf
Contributor

behlendorf commented Feb 9, 2018

It depends on the exact device type, but in general if the device can be partitioned it will be. How about using blkid to identify the device path that should be used?

@PaulZ-98
Contributor

PaulZ-98 commented Feb 9, 2018

Something like this?

# blkid -po udev /dev/sdb |grep zfs
# blkid -po udev /dev/sdb1 |grep zfs
ID_PART_ENTRY_NAME=zfs-fd4936b09f37a500

@behlendorf
Contributor

behlendorf commented Feb 9, 2018

I'd suggest using the pool guid to look up its vdevs. This will match the UUID token from blkid. For example:

$ zpool create tank vdc vdd
$ zpool export tank
$ zpool import
   pool: tank
     id: 11884075113069035057
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	tank        ONLINE
	  vdc       ONLINE
	  vdd       ONLINE
$ blkid -t UUID=11884075113069035057
/dev/vdc1: LABEL="tank" UUID="11884075113069035057" UUID_SUB="12433243056806542889" TYPE="zfs_member" PARTLABEL="zfs-5bf10f21bd53d331" PARTUUID="166c25f8-b703-b94c-aa00-312272270975" 
/dev/vdd1: LABEL="tank" UUID="11884075113069035057" UUID_SUB="14263448663866137408" TYPE="zfs_member" PARTLABEL="zfs-bdafa5272894b1b0" PARTUUID="4d373b24-c141-5447-b7a4-4e5221bf9cf4" 

@PaulZ-98
Contributor

PaulZ-98 commented Feb 10, 2018

I added code to zdb_003_pos.ksh to override "DISK":

DEVS=$(get_pool_devices ${TESTPOOL} ${DEV_RDSKDIR})
[[ -n $DEVS ]] && set -A DISK $DEVS

and a new function to include/blkdev.sh

#
# Get actual devices used by the pool (i.e. linux sdb1 not sdb).
#
function get_pool_devices #testpool #devdir
{
        typeset testpool=$1
        typeset devdir=$2
        typeset guid
        typeset out=""

        if is_linux; then
                guid=$(zpool get -H guid $testpool | awk '{print $3}')
                zpool export $testpool
                out=$(blkid -t UUID=${guid} | sort | cut -f1 -d' ' | sed 's/.\{1\}$//' | tr '\n' ' ')
                out=$(echo $out | sed -e "s|${devdir}/||g")
                zpool import $testpool
        fi
        echo $out
}

Or should the new function go in libtest.shlib?

@behlendorf
Contributor

Adding it to include/blkdev.sh sounds good. You shouldn't need to export/import the pool either in the function. Incidentally, an alternate way to get the same path information is with the zpool status -P option. Either approach is OK with me.

@PaulZ-98
Contributor

PaulZ-98 commented Feb 12, 2018

Thanks! Great suggestion using zpool status -P. That cleans it up quite a bit.

#
# Get actual devices used by the pool (i.e. linux sdb1 not sdb).
#
function get_pool_devices #testpool #devdir
{
        typeset testpool=$1
        typeset devdir=$2
        typeset out=""

        if is_linux; then
                out=$(zpool status -P $testpool |grep ${devdir} | awk '{print $1}')
                out=$(echo $out | sed -e "s|${devdir}/||g" | tr '\n' ' ')
        fi
        echo $out 
}
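
The call site in the zdb tests stays the same as in the earlier snippet:

DEVS=$(get_pool_devices ${TESTPOOL} ${DEV_RDSKDIR})
[[ -n $DEVS ]] && set -A DISK $DEVS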

@PaulZ-98
Contributor

The issue mentioned above, where a pool created over zvols hangs, is a duplicate of (or at least similar to) #6145.

PaulZ-98 added a commit to datto/zfs that referenced this issue Mar 2, 2018
Due to zpool create auto-partitioning in Linux (i.e. sdb1),
certain utilities need to use the partition (sdb1) while
others use the whole disk name (sdb).
Fixes openzfs#6939

Authored-by: Paul Zuchowski <pzuchowski@datto.com>
Signed-off-by: Paul Zuchowski <pzuchowski@datto.com>
behlendorf pushed a commit that referenced this issue Mar 8, 2018
Due to zpool create auto-partitioning in Linux (i.e. sdb1),
certain utilities need to use the partition (sdb1) while
others use the whole disk name (sdb).

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Zuchowski <pzuchowski@datto.com>
Closes #6939 
Closes #7261
tonyhutter pushed a commit to tonyhutter/zfs that referenced this issue Mar 13, 2018
This is a squashed patchset for zfs-0.7.7.  The individual commits are
in the tonyhutter:zfs-0.7.7-hutter branch.  I squashed the commits so
that buildbot wouldn't have to run against each one, and because
github/buildbot seem to have a maximum limit of 30 commits they can
test from a PR.

- Fix MMP write frequency for large pools openzfs#7205 openzfs#7289
- Handle zio_resume and mmp => off openzfs#7286
- Fix zfs-kmod builds when using rpm >= 4.14 openzfs#7284
- zdb and inuse tests don't pass with real disks openzfs#6939 openzfs#7261
- Take user namespaces into account in policy checks openzfs#6800 openzfs#7270
- Detect long config lock acquisition in mmp openzfs#7212
- Linux 4.16 compat: get_disk_and_module() openzfs#7264
- Change checksum & IO delay ratelimit values openzfs#7252
- Increment zil_itx_needcopy_bytes properly openzfs#6988 openzfs#7176
- Fix some typos openzfs#7237
- Fix zpool(8) list example to match actual format openzfs#7244
- Add SMART self-test results to zpool status -c openzfs#7178
- Add scrub after resilver zed script openzfs#4662 openzfs#7086
- Fix free memory calculation on v3.14+ openzfs#7170
- Report duration and error in mmp_history entries openzfs#7190
- Do not initiate MMP writes while pool is suspended openzfs#7182
- Linux 4.16 compat: use correct *_dec_and_test()
- Allow modprobe to fail when called within systemd openzfs#7174
- Add SMART attributes for SSD and NVMe openzfs#7183 openzfs#7193
- Correct count_uberblocks in mmp.kshlib openzfs#7191
- Fix config issues: frame size and headers openzfs#7169
- Clarify zinject(8) explanation of -e openzfs#7172
- OpenZFS 8857 - zio_remove_child() panic due to already destroyed parent zio openzfs#7168
- 'zfs receive' fails with "dataset is busy" openzfs#7129 openzfs#7154
- contrib/initramfs: add missing conf.d/zfs openzfs#7158
- mmp should use a fixed tag for spa_config locks openzfs#6530 openzfs#7155
- Handle zap_add() failures in mixed case mode openzfs#7011 openzfs#7054
- Fix zdb -ed on objset for exported pool openzfs#7099 openzfs#6464
- Fix zdb -E segfault openzfs#7099
- Fix zdb -R decompression openzfs#7099 openzfs#4984
- Fix racy assignment of zcb.zcb_haderrors openzfs#7099
- Fix zle_decompress out of bound access openzfs#7099
- Fix zdb -c traverse stop on damaged objset root openzfs#7099
- Linux 4.11 compat: avoid refcount_t name conflict openzfs#7148
- Linux 4.16 compat: inode_set_iversion() openzfs#7148
- OpenZFS 8966 - Source file zfs_acl.c, function zfs_aclset_common contains a use after end of the lifetime of a local variable openzfs#7141
- Remove deprecated zfs_arc_p_aggressive_disable openzfs#7135
- Fix default libdir for Debian/Ubuntu openzfs#7083 openzfs#7101
- Bug fix in qat_compress.c for vmalloc addr check openzfs#7125
- Fix systemd_ RPM macros usage on Debian-based distributions openzfs#7074 openzfs#7100
- Emit an error message before MMP suspends pool openzfs#7048
- ZTS: Fix create-o_ashift test case openzfs#6924 openzfs#6977
- Fix --with-systemd on Debian-based distributions (openzfs#6963) openzfs#6591 openzfs#6963
- Remove vn_rename and vn_remove dependency openzfs/spl#648 openzfs#6753
- Add support for "--enable-code-coverage" option openzfs#6670
- Make "-fno-inline" compile option more accessible openzfs#6605
- Add configure option to enable gcov analysis openzfs#6642
- Implement --enable-debuginfo to force debuginfo openzfs#2734
- Make --enable-debug fail when given bogus args openzfs#2734

Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Requires-spl: refs/pull/690/head
tonyhutter pushed a commit that referenced this issue Mar 19, 2018
Due to zpool create auto-partitioning in Linux (i.e. sdb1),
certain utilities need to use the partition (sdb1) while
others use the whole disk name (sdb).

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Paul Zuchowski <pzuchowski@datto.com>
Closes #6939
Closes #7261