
zpool export kernel panic VERIFY3(tx->tx_txg <= spa_final_dirty_txg(os->os_spa)) failed (18339180 <= 18339179) #13048

Closed
stuartthebruce opened this issue Jan 30, 2022 · 8 comments
Labels
Type: Defect Incorrect behavior (e.g. crash, hang)

Comments

@stuartthebruce

System information

Type Version/Name
Distribution Name Scientific Linux
Distribution Version 7.9
Kernel Version 3.10.0-1160.53.1.el7
Architecture x86_64
OpenZFS Version 2.1.2

Describe the problem you're observing

Kernel panic while exporting a pool, requiring a system reset to recover. After rebooting, the export command succeeded.

Describe how to reproduce the problem

[root@origin-temp ~]# zpool export zfs-backup

Message from syslogd@origin-temp at Jan 30 11:42:07 ...
 kernel:VERIFY3(tx->tx_txg <= spa_final_dirty_txg(os->os_spa)) failed (18339180 <= 18339179)

Message from syslogd@origin-temp at Jan 30 11:42:07 ...
 kernel:PANIC at dbuf.c:2191:dbuf_dirty()
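The failed assertion means the sync thread tried to dirty a buffer in txg 18339180 after the pool's final dirty txg had already been fixed at 18339179 during export. A minimal sketch of that invariant (the names mirror the ZFS ones, but this is an illustration, not the actual implementation):

```python
# Minimal sketch of the VERIFY3-style invariant that tripped here.
# dbuf_dirty/final_dirty_txg are illustrative names, not ZFS code.

class PanicError(AssertionError):
    pass

def dbuf_dirty(tx_txg, final_dirty_txg):
    """Refuse to dirty a buffer in a txg later than the pool's final dirty txg."""
    if not tx_txg <= final_dirty_txg:
        raise PanicError(
            f"VERIFY3(tx->tx_txg <= spa_final_dirty_txg) failed "
            f"({tx_txg} <= {final_dirty_txg})")
    return tx_txg

# Normal case: dirtying within the allowed window succeeds.
dbuf_dirty(18339179, 18339179)

# The failure reported above: one txg past the final dirty txg.
try:
    dbuf_dirty(18339180, 18339179)
except PanicError as e:
    print(e)
```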

Include any warning/errors/backtraces from the system logs

Jan 30 11:42:05 origin-temp zed: eid=35 class=pool_export pool='zfs-backup' pool_state=EXPORTED
Jan 30 11:42:07 origin-temp kernel: VERIFY3(tx->tx_txg <= spa_final_dirty_txg(os->os_spa)) failed (18339180 <= 18339179)
Jan 30 11:42:07 origin-temp kernel: PANIC at dbuf.c:2191:dbuf_dirty()
Jan 30 11:42:07 origin-temp kernel: Showing stack for process 110154
Jan 30 11:42:07 origin-temp kernel: CPU: 16 PID: 110154 Comm: txg_sync Kdump: loaded Tainted: P           OE  ------------   3.10.0-1160.53.1.el7.x86_64 #1
Jan 30 11:42:07 origin-temp kernel: Hardware name: Supermicro SYS-2029U-TN24R4T/X11DPU, BIOS 3.5a 08/20/2021
Jan 30 11:42:07 origin-temp kernel: Call Trace:
Jan 30 11:42:07 origin-temp kernel: [<ffffffff86583579>] dump_stack+0x19/0x1b
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc132dc5b>] spl_dumpstack+0x2b/0x30 [spl]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc132dd29>] spl_panic+0xc9/0x110 [spl]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc18cf644>] ? arc_buf_access+0x254/0x280 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc18e1af4>] ? dbuf_read+0x414/0x5b0 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc18e3415>] dbuf_dirty+0x855/0x860 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc18e3be2>] dmu_buf_will_dirty_impl+0xc2/0x170 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc18e3ca6>] dmu_buf_will_dirty+0x16/0x20 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc196d44d>] spa_history_log_sync+0xdd/0x810 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc132f6cf>] ? spl_kmem_cache_free+0x19f/0x210 [spl]
Jan 30 11:42:07 origin-temp kernel: [<ffffffff865875f2>] ? mutex_lock+0x12/0x2f
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc193bbe8>] dsl_sync_task_sync+0xf8/0x100 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc192f3c3>] dsl_pool_sync+0x3a3/0x530 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc195dc53>] spa_sync+0x5b3/0x10f0 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc19757fb>] ? spa_txg_history_init_io+0x10b/0x120 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc19797cf>] txg_sync_thread+0x2cf/0x4c0 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc1979500>] ? txg_init+0x2b0/0x2b0 [zfs]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc1334523>] thread_generic_wrapper+0x73/0x80 [spl]
Jan 30 11:42:07 origin-temp kernel: [<ffffffffc13344b0>] ? __thread_exit+0x20/0x20 [spl]
Jan 30 11:42:07 origin-temp kernel: [<ffffffff85ec5e61>] kthread+0xd1/0xe0
Jan 30 11:42:07 origin-temp kernel: [<ffffffff85ec5d90>] ? insert_kthread_work+0x40/0x40
Jan 30 11:42:07 origin-temp kernel: [<ffffffff86595ddd>] ret_from_fork_nospec_begin+0x7/0x21
Jan 30 11:42:07 origin-temp kernel: [<ffffffff85ec5d90>] ? insert_kthread_work+0x40/0x40
@stuartthebruce stuartthebruce added the Type: Defect Incorrect behavior (e.g. crash, hang) label Jan 30, 2022
@gamanakis
Contributor

Possibly a duplicate of #11051, the stack seems to be the same.

@gamanakis
Contributor

Could you post the output of zpool history zfs-backup?

@stuartthebruce
Author

Here is the last bit; let me know if you want more:

[root@origin-temp ~]# zpool history zfs-backup
...
2022-01-29.08:19:58 zfs rollback -R zfs-backup/cit/gwosc/big@autosnap_2022-01-28_14:15:21_hourly
2022-01-29.08:19:58 zfs rollback -R zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_14:15:21_hourly
2022-01-29.08:20:01 zfs rollback -R zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_14:15:21_hourly
2022-01-29.08:25:46 zfs receive -s -F zfs-backup/cit/gwosc/big
2022-01-29.08:25:46 zfs receive -s -F zfs-backup/cit/gwosc/fast
2022-01-29.08:25:49 zfs receive -s -F zfs-backup/cit/gwosc/loscdata
2022-01-29.08:30:04 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-14_00:07:21_daily
2022-01-29.08:30:11 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_15:00:44_hourly
2022-01-29.08:30:18 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_16:04:21_hourly
2022-01-29.08:30:25 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_17:06:21_hourly
2022-01-29.08:30:31 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_18:10:21_hourly
2022-01-29.08:30:37 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_19:13:21_hourly
2022-01-29.08:30:44 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_20:01:01_hourly
2022-01-29.08:30:50 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_21:04:21_hourly
2022-01-29.08:30:57 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_22:08:21_hourly
2022-01-29.08:31:03 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-25_23:12:11_hourly
2022-01-29.08:31:09 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_00:15:21_hourly
2022-01-29.08:31:15 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_01:03:21_hourly
2022-01-29.08:31:23 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_02:07:21_hourly
2022-01-29.08:31:29 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_03:10:21_hourly
2022-01-29.08:31:35 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_04:13:21_hourly
2022-01-29.08:31:42 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_05:01:02_hourly
2022-01-29.08:31:48 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_06:04:21_hourly
2022-01-29.08:31:55 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_07:08:21_hourly
2022-01-29.08:32:01 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_08:12:10_hourly
2022-01-29.08:32:07 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_09:15:21_hourly
2022-01-29.08:32:13 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_10:03:21_hourly
2022-01-29.08:32:20 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_11:07:21_hourly
2022-01-29.08:32:26 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_12:11:21_hourly
2022-01-29.08:32:32 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_13:15:21_hourly
2022-01-29.08:32:38 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_14:02:42_hourly
2022-01-29.08:32:45 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_15:06:21_hourly
2022-01-29.08:32:51 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_16:10:21_hourly
2022-01-29.08:32:57 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-14_00:07:21_daily
2022-01-29.08:33:04 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_15:00:44_hourly
2022-01-29.08:33:10 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_16:04:21_hourly
2022-01-29.08:33:17 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_17:06:21_hourly
2022-01-29.08:33:24 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_18:10:21_hourly
2022-01-29.08:33:30 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_19:13:21_hourly
2022-01-29.08:33:37 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_20:01:01_hourly
2022-01-29.08:33:43 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_21:04:21_hourly
2022-01-29.08:33:49 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_22:08:21_hourly
2022-01-29.08:33:56 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-25_23:12:11_hourly
2022-01-29.08:34:02 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_00:15:21_hourly
2022-01-29.08:34:09 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_01:03:21_hourly
2022-01-29.08:34:15 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_02:07:21_hourly
2022-01-29.08:34:21 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_03:10:21_hourly
2022-01-29.08:34:28 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_04:13:21_hourly
2022-01-29.08:34:34 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_05:01:01_hourly
2022-01-29.08:34:41 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_06:04:21_hourly
2022-01-29.08:34:47 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_07:08:21_hourly
2022-01-29.08:34:54 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_08:12:11_hourly
2022-01-29.08:35:00 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_09:15:21_hourly
2022-01-29.08:35:07 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_10:03:21_hourly
2022-01-29.08:35:14 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_11:07:21_hourly
2022-01-29.08:35:20 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_12:11:21_hourly
2022-01-29.08:35:26 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_13:15:21_hourly
2022-01-29.08:35:33 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_14:02:42_hourly
2022-01-29.08:35:39 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_15:06:21_hourly
2022-01-29.08:35:45 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_16:10:21_hourly
2022-01-29.08:35:51 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-14_00:07:21_daily
2022-01-29.08:35:58 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_15:00:44_hourly
2022-01-29.08:36:05 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_16:04:21_hourly
2022-01-29.08:36:12 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_17:06:21_hourly
2022-01-29.08:36:18 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_18:10:21_hourly
2022-01-29.08:36:24 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_19:13:21_hourly
2022-01-29.08:36:31 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_20:01:01_hourly
2022-01-29.08:36:38 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_21:04:21_hourly
2022-01-29.08:36:45 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_22:08:21_hourly
2022-01-29.08:36:51 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-25_23:12:11_hourly
2022-01-29.08:36:57 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_00:15:21_hourly
2022-01-29.08:37:04 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_01:03:21_hourly
2022-01-29.08:37:11 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_02:07:21_hourly
2022-01-29.08:37:17 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_03:10:21_hourly
2022-01-29.08:37:23 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_04:13:21_hourly
2022-01-29.08:37:30 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_05:01:02_hourly
2022-01-29.08:37:36 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_06:04:21_hourly
2022-01-29.08:37:43 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_07:08:21_hourly
2022-01-29.08:37:49 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_08:12:10_hourly
2022-01-29.08:37:56 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_09:15:21_hourly
2022-01-29.08:38:02 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_10:03:21_hourly
2022-01-29.08:38:08 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_11:07:21_hourly
2022-01-29.08:38:15 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_12:11:21_hourly
2022-01-29.08:38:21 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_13:15:21_hourly
2022-01-29.08:38:28 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_14:02:42_hourly
2022-01-29.08:38:34 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_15:06:21_hourly
2022-01-29.08:38:40 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_16:10:21_hourly
2022-01-30.08:29:34 zfs rollback -R zfs-backup/cit/gwosc/fast@autosnap_2022-01-29_16:15:21_hourly
2022-01-30.08:29:38 zfs rollback -R zfs-backup/cit/gwosc/big@autosnap_2022-01-29_16:15:21_hourly
2022-01-30.08:29:38 zfs rollback -R zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-29_16:15:21_hourly
2022-01-30.08:35:11 zfs receive -s -F zfs-backup/cit/gwosc/fast
2022-01-30.08:35:15 zfs receive -s -F zfs-backup/cit/gwosc/big
2022-01-30.08:35:15 zfs receive -s -F zfs-backup/cit/gwosc/loscdata
2022-01-30.08:45:07 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_17:13:21_hourly
2022-01-30.08:45:14 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_18:01:02_hourly
2022-01-30.08:45:20 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_17:13:21_hourly
2022-01-30.08:45:26 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_18:01:02_hourly
2022-01-30.08:45:32 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_17:13:21_hourly
2022-01-30.08:45:39 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_18:01:02_hourly
2022-01-30.09:15:06 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-14_23:59:21_daily
2022-01-30.09:15:13 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_19:04:21_hourly
2022-01-30.09:15:20 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_20:08:21_hourly
2022-01-30.09:15:26 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_21:12:10_hourly
2022-01-30.09:15:33 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_22:15:21_hourly
2022-01-30.09:15:39 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-26_23:03:21_hourly
2022-01-30.09:15:47 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_00:07:21_hourly
2022-01-30.09:15:53 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_01:11:21_hourly
2022-01-30.09:15:59 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_02:14:21_hourly
2022-01-30.09:16:06 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_03:02:21_hourly
2022-01-30.09:16:13 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_04:06:21_hourly
2022-01-30.09:16:20 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_05:10:21_hourly
2022-01-30.09:16:26 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_06:13:21_hourly
2022-01-30.09:16:33 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_07:00:21_hourly
2022-01-30.09:16:40 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_08:04:21_hourly
2022-01-30.09:16:47 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_09:08:21_hourly
2022-01-30.09:16:54 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_10:12:11_hourly
2022-01-30.09:17:00 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_11:15:21_hourly
2022-01-30.09:17:07 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_12:03:21_hourly
2022-01-30.09:17:14 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_13:07:21_hourly
2022-01-30.09:17:21 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_14:10:21_hourly
2022-01-30.09:17:27 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_15:13:21_hourly
2022-01-30.09:17:34 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_16:01:01_hourly
2022-01-30.09:17:41 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-14_23:59:21_daily
2022-01-30.09:17:48 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_19:04:21_hourly
2022-01-30.09:17:54 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_20:08:21_hourly
2022-01-30.09:18:01 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_21:12:10_hourly
2022-01-30.09:18:08 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_22:15:21_hourly
2022-01-30.09:18:15 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-26_23:03:21_hourly
2022-01-30.09:18:22 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_00:07:22_hourly
2022-01-30.09:18:28 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_01:11:21_hourly
2022-01-30.09:18:35 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_02:14:21_hourly
2022-01-30.09:18:41 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_03:02:21_hourly
2022-01-30.09:18:47 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_04:06:21_hourly
2022-01-30.09:18:54 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_05:10:21_hourly
2022-01-30.09:19:01 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_06:13:21_hourly
2022-01-30.09:19:08 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_07:00:21_hourly
2022-01-30.09:19:14 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_08:04:21_hourly
2022-01-30.09:19:21 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_09:08:21_hourly
2022-01-30.09:19:27 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_10:12:11_hourly
2022-01-30.09:19:33 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_11:15:21_hourly
2022-01-30.09:19:40 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_12:03:21_hourly
2022-01-30.09:19:47 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_13:07:21_hourly
2022-01-30.09:19:54 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_14:10:21_hourly
2022-01-30.09:20:01 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_15:13:21_hourly
2022-01-30.09:20:07 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_16:01:01_hourly
2022-01-30.09:20:14 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-14_23:59:21_daily
2022-01-30.09:20:21 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_19:04:21_hourly
2022-01-30.09:20:27 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_20:08:21_hourly
2022-01-30.09:20:34 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_21:12:10_hourly
2022-01-30.09:20:40 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_22:15:21_hourly
2022-01-30.09:20:47 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-26_23:03:21_hourly
2022-01-30.09:20:54 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_00:07:21_hourly
2022-01-30.09:21:01 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_01:11:21_hourly
2022-01-30.09:21:08 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_02:14:21_hourly
2022-01-30.09:21:15 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_03:02:21_hourly
2022-01-30.09:21:22 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_04:06:21_hourly
2022-01-30.09:21:29 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_05:10:21_hourly
2022-01-30.09:21:36 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_06:13:21_hourly
2022-01-30.09:21:43 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_07:00:21_hourly
2022-01-30.09:21:50 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_08:04:21_hourly
2022-01-30.09:21:57 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_09:08:21_hourly
2022-01-30.09:22:04 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_10:12:11_hourly
2022-01-30.09:22:11 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_11:15:21_hourly
2022-01-30.09:22:18 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_12:03:21_hourly
2022-01-30.09:22:25 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_13:07:21_hourly
2022-01-30.09:22:33 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_14:10:21_hourly
2022-01-30.09:22:40 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_15:13:21_hourly
2022-01-30.09:22:46 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_16:01:01_hourly
2022-01-30.11:06:18 zpool upgrade zfs-backup
2022-01-30.11:08:14 zpool export zfs-backup
2022-01-30.11:08:53 zpool import zfs-backup
2022-01-30.11:09:06 zpool export zfs-backup
2022-01-30.11:09:32 zpool import zfs-backup -d /dev/disk/by-id/
2022-01-30.11:09:53 zpool export zfs-backup
2022-01-30.11:21:17 zpool import zfs-backup
2022-01-30.11:22:53 zpool export zfs-backup
2022-01-30.11:40:34 zpool import zfs-backup -d /dev/disk/by-partlabel
2022-01-30.11:40:45 zpool export zfs-backup
2022-01-30.11:41:06 zpool import zfs-backup -d /dev/disk/by-id
2022-01-30.11:41:34 zpool export zfs-backup
2022-01-30.11:41:57 zpool import zfs-backup
2022-01-30.11:42:05 zpool export zfs-backup
2022-01-30.11:55:55 zpool import -c /etc/zfs/zpool.cache -aN
2022-01-30.11:57:20 zpool export zfs-backup
2022-01-30.11:59:20 zpool import zfs-backup -d /dev/disk/by-id
2022-01-30.11:59:40 zpool scrub zfs-backup
2022-01-31.07:59:27 zfs rollback -R zfs-backup/cit/gwosc/fast@autosnap_2022-01-30_16:08:21_hourly
2022-01-31.07:59:29 zfs rollback -R zfs-backup/cit/gwosc/big@autosnap_2022-01-30_16:08:21_hourly
2022-01-31.07:59:31 zfs rollback -R zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-30_16:08:21_hourly
2022-01-31.08:01:11 zfs receive -s -F zfs-backup/cit/gwosc/fast
2022-01-31.08:01:13 zfs receive -s -F zfs-backup/cit/gwosc/big
2022-01-31.08:01:16 zfs receive -s -F zfs-backup/cit/gwosc/loscdata
2022-01-31.08:15:02 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-16_00:03:21_daily
2022-01-31.08:15:04 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_17:04:21_hourly
2022-01-31.08:15:06 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_18:08:21_hourly
2022-01-31.08:15:09 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_19:11:21_hourly
2022-01-31.08:15:11 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_20:15:21_hourly
2022-01-31.08:15:13 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_21:03:21_hourly
2022-01-31.08:15:15 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_22:07:21_hourly
2022-01-31.08:15:17 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-27_23:11:21_hourly
2022-01-31.08:15:19 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_00:15:21_hourly
2022-01-31.08:15:21 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_01:02:21_hourly
2022-01-31.08:15:23 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_02:06:21_hourly
2022-01-31.08:15:26 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_03:09:36_hourly
2022-01-31.08:15:28 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_04:13:21_hourly
2022-01-31.08:15:30 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_05:01:01_hourly
2022-01-31.08:15:32 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_06:04:10_hourly
2022-01-31.08:15:34 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_07:07:21_hourly
2022-01-31.08:15:36 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_08:10:21_hourly
2022-01-31.08:15:38 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_09:13:21_hourly
2022-01-31.08:15:40 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_10:01:02_hourly
2022-01-31.08:15:42 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_11:04:21_hourly
2022-01-31.08:15:44 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_12:08:21_hourly
2022-01-31.08:15:46 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_13:12:10_hourly
2022-01-31.08:15:49 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_14:15:21_hourly
2022-01-31.08:15:51 zfs destroy zfs-backup/cit/gwosc/fast@autosnap_2022-01-28_15:03:21_hourly
2022-01-31.08:15:53 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-16_00:03:21_daily
2022-01-31.08:15:55 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_17:04:21_hourly
2022-01-31.08:15:57 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_18:08:21_hourly
2022-01-31.08:16:00 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_19:11:21_hourly
2022-01-31.08:16:02 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_20:15:21_hourly
2022-01-31.08:16:04 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_21:03:21_hourly
2022-01-31.08:16:06 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_22:07:21_hourly
2022-01-31.08:16:08 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-27_23:11:21_hourly
2022-01-31.08:16:10 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_00:15:21_hourly
2022-01-31.08:16:12 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_01:02:21_hourly
2022-01-31.08:16:14 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_02:06:21_hourly
2022-01-31.08:16:16 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_03:09:36_hourly
2022-01-31.08:16:19 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_04:13:21_hourly
2022-01-31.08:16:21 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_05:01:02_hourly
2022-01-31.08:16:23 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_06:04:10_hourly
2022-01-31.08:16:25 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_07:07:21_hourly
2022-01-31.08:16:27 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_08:10:21_hourly
2022-01-31.08:16:29 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_09:13:21_hourly
2022-01-31.08:16:31 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_10:01:02_hourly
2022-01-31.08:16:33 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_11:04:21_hourly
2022-01-31.08:16:35 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_12:08:21_hourly
2022-01-31.08:16:37 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_13:12:11_hourly
2022-01-31.08:16:39 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_14:15:21_hourly
2022-01-31.08:16:41 zfs destroy zfs-backup/cit/gwosc/big@autosnap_2022-01-28_15:03:21_hourly
2022-01-31.08:16:44 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-16_00:03:21_daily
2022-01-31.08:16:46 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_17:04:21_hourly
2022-01-31.08:16:48 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_18:08:21_hourly
2022-01-31.08:16:50 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_19:11:21_hourly
2022-01-31.08:16:52 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_20:15:21_hourly
2022-01-31.08:16:55 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_21:03:21_hourly
2022-01-31.08:16:57 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_22:07:21_hourly
2022-01-31.08:16:59 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-27_23:11:21_hourly
2022-01-31.08:17:01 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_00:15:21_hourly
2022-01-31.08:17:03 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_01:02:21_hourly
2022-01-31.08:17:05 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_02:06:21_hourly
2022-01-31.08:17:07 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_03:09:36_hourly
2022-01-31.08:17:09 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_04:13:21_hourly
2022-01-31.08:17:12 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_05:01:01_hourly
2022-01-31.08:17:14 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_06:04:10_hourly
2022-01-31.08:17:16 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_07:07:21_hourly
2022-01-31.08:17:19 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_08:10:21_hourly
2022-01-31.08:17:21 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_09:13:21_hourly
2022-01-31.08:17:23 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_10:01:02_hourly
2022-01-31.08:17:26 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_11:04:21_hourly
2022-01-31.08:17:28 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_12:08:21_hourly
2022-01-31.08:17:30 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_13:12:11_hourly
2022-01-31.08:17:32 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_14:15:21_hourly
2022-01-31.08:17:34 zfs destroy zfs-backup/cit/gwosc/loscdata@autosnap_2022-01-28_15:03:21_hourly

@gamanakis
Contributor

2022-01-30.11:08:14 zpool export zfs-backup
2022-01-30.11:08:53 zpool import zfs-backup
2022-01-30.11:09:06 zpool export zfs-backup
2022-01-30.11:09:32 zpool import zfs-backup -d /dev/disk/by-id/
2022-01-30.11:09:53 zpool export zfs-backup
2022-01-30.11:21:17 zpool import zfs-backup
2022-01-30.11:22:53 zpool export zfs-backup
2022-01-30.11:40:34 zpool import zfs-backup -d /dev/disk/by-partlabel
2022-01-30.11:40:45 zpool export zfs-backup
2022-01-30.11:41:06 zpool import zfs-backup -d /dev/disk/by-id
2022-01-30.11:41:34 zpool export zfs-backup
2022-01-30.11:41:57 zpool import zfs-backup
2022-01-30.11:42:05 zpool export zfs-backup

That last export triggered the panic (based on the timestamps). Were you doing those imports/exports manually or through a script? Also, do you remember doing anything else not logged in the pool history right before the last export?

@stuartthebruce
Author

That last export triggered the panic (based on the timestamps). Were you doing those imports/exports manually or through a script?

Manually.

Also, do you remember doing anything else not logged in the pool history right before the last export?

The pool was idle, and I was fiddling with which device names were displayed in the output of zpool status after the following history item completed:

2022-01-25.12:10:00 zpool replace zfs-backup 15589637582131444444 sdai

@gamanakis
Contributor

gamanakis commented Feb 9, 2022

I think this bug may be due to SPA_ASYNC_CONFIG_UPDATE being called at some point between setting spa->spa_final_txg and calling spa_unload()->spa_async_suspend() and txg_sync_stop() in spa_export_common().

Notably, when SPA_ASYNC_CONFIG_UPDATE is set, spa_async_thread() may call spa_history_log_internal() in the above time frame, which would then trigger the panic.

Edit: actually spa_export_common() calls spa_async_suspend() which stops all async tasks close to the beginning, so the above hypothesis is probably wrong.
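Whatever the exact culprit, the general shape of the race can be sketched with a toy model: if anything still queues a synced write (e.g. a history-log entry) after spa_final_txg is pinned, the sync thread dirties a txg past the limit and the VERIFY fires. All names below are illustrative, not the real spa_export_common():

```python
# Toy model of the export-time ordering problem: export pins the last
# allowed txg, and any straggler sync task afterwards runs one txg later,
# tripping the invariant. Purely illustrative, not ZFS code.

def export_pool(pending_sync_tasks, current_txg):
    final_txg = current_txg            # export pins the last allowed txg
    for task in pending_sync_tasks:    # anything still queued runs one txg later
        current_txg += 1
        if current_txg > final_txg:
            return f"PANIC: dirtying txg {current_txg} > final {final_txg} ({task})"
    return "exported cleanly"

print(export_pool([], 18339179))                        # no stragglers: fine
print(export_pool(["spa_history_log_sync"], 18339179))  # late task: panics
```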

@gamanakis
Contributor

gamanakis commented Feb 12, 2022

@stuartthebruce Do you have "autotrim=on" for that pool (zpool get autotrim poolname)?

@stuartthebruce
Author

@stuartthebruce Do you have "autotrim=on" for that pool (zpool get autotrim poolname)?

I didn't,

[root@origin-temp ~]# zpool get autotrim zfs-backup
NAME        PROPERTY  VALUE     SOURCE
zfs-backup  autotrim  off       default

but I do now,

[root@origin-temp ~]# zpool set autotrim=on zfs-backup

[root@origin-temp ~]# zpool get autotrim zfs-backup
NAME        PROPERTY  VALUE     SOURCE
zfs-backup  autotrim  on        local

tonyhutter pushed a commit to tonyhutter/zfs that referenced this issue Feb 24, 2022
There are two codepaths that can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098
andrewc12 pushed a commit to andrewc12/openzfs that referenced this issue Aug 18, 2022
nicman23 pushed a commit to nicman23/zfs that referenced this issue Aug 22, 2022
nicman23 pushed a commit to nicman23/zfs that referenced this issue Aug 22, 2022
lundman added a commit to openzfsonwindows/openzfs that referenced this issue Aug 24, 2022
* Avoid dirtying the final TXGs when exporting a pool


* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

Use the common spa prototypes instead of the Windows-specific ones.

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
andrewc12 pushed a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

Use the common spa prototypes instead of the Windows-specific ones.

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 27, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Sep 23, 2022
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

Use the common spa prototypes instead of the Windows-specific ones.

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
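The reordering described in the commit message above can be sketched as a toy model. Everything here — `spa_t`, `TXG_UNSET`, and all function bodies — is a simplified illustrative stand-in, not the real OpenZFS code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of the fix: spa_t, TXG_UNSET, and every
 * function below are illustrative stubs, not the real OpenZFS code. */
#define TXG_UNSET UINT64_MAX

typedef struct spa {
	uint64_t spa_final_txg;   /* TXG_UNSET until export pins it */
	uint64_t cur_txg;         /* currently open TXG */
	uint64_t last_dirtied_txg;
} spa_t;

/* Flushing log spacemaps dirties the currently open TXG. */
static void
spa_unload_log_sm_flush_all(spa_t *spa)
{
	/* Dirtying a TXG past spa_final_txg is what tripped the VERIFY3. */
	assert(spa->cur_txg <= spa->spa_final_txg);
	spa->last_dirtied_txg = spa->cur_txg++;
}

static void
spa_unload(spa_t *spa)
{
	/* After the fix: skip the flush when the final TXG is already set. */
	if (spa->spa_final_txg == TXG_UNSET)
		spa_unload_log_sm_flush_all(spa);
	/* ... tear down the rest of the pool state ... */
}

static void
spa_export_common(spa_t *spa)
{
	/* Fixed ordering: flush *before* pinning spa_final_txg, so the
	 * dirtied TXG can never exceed it. */
	spa_unload_log_sm_flush_all(spa);
	spa->spa_final_txg = spa->cur_txg + 2; /* leave room for final syncs */
	spa_unload(spa);
}
```

With the pre-fix ordering (pin `spa_final_txg` first, then flush from `spa_unload()`), the assertion in the flush stub fires, mirroring the `VERIFY3(tx->tx_txg <= spa_final_dirty_txg(...))` panic in the report.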
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 1, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 2, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 6, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 6, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 6, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

* Add spa _os() hooks

Add hooks for when spa is created, exported, activated and
deactivated. Used by macOS to attach iokit, and lock
kext as busy (to stop unloads).

Userland, Linux, and, FreeBSD have empty stubs.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Jorgen Lundman <lundman@lundman.net>
Closes openzfs#12801

* Windows: Add spa_os() hooks

use the common spa prototypes instead of the windows specific ones

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>

Signed-off-by: Andrew Innes <andrew.c12@gmail.com>
Co-authored-by: George Amanakis <gamanakis@gmail.com>
Co-authored-by: Jorgen Lundman <lundman@lundman.net>
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Oct 9, 2022
* Avoid dirtying the final TXGs when exporting a pool

There are two codepaths than can dirty final TXGs:

1) If calling spa_export_common()->spa_unload()->
   spa_unload_log_sm_flush_all() after the spa_final_txg is set, then
   spa_sync()->spa_flush_metaslabs() may end up dirtying the final
   TXGs. Then we have the following panic:
   Call Trace:
    <TASK>
    dump_stack_lvl+0x46/0x62
    spl_panic+0xea/0x102 [spl]
    dbuf_dirty+0xcd6/0x11b0 [zfs]
    zap_lockdir_impl+0x321/0x590 [zfs]
    zap_lockdir+0xed/0x150 [zfs]
    zap_update+0x69/0x250 [zfs]
    feature_sync+0x5f/0x190 [zfs]
    space_map_alloc+0x83/0xc0 [zfs]
    spa_generate_syncing_log_sm+0x10b/0x2f0 [zfs]
    spa_flush_metaslabs+0xb2/0x350 [zfs]
    spa_sync_iterate_to_convergence+0x15a/0x320 [zfs]
    spa_sync+0x2e0/0x840 [zfs]
    txg_sync_thread+0x2b1/0x3f0 [zfs]
    thread_generic_wrapper+0x62/0xa0 [spl]
    kthread+0x127/0x150
    ret_from_fork+0x22/0x30
    </TASK>

2) Calling vdev_*_stop_all() for a second time in spa_unload() after
   spa_export_common() unnecessarily delays the final TXGs beyond what
   spa_final_txg is set at.

Fix this by performing the check and call for
spa_unload_log_sm_flush_all() before the spa_final_txg is set in
spa_export_common(). Also check if the spa_final_txg has already been
set in spa_unload() and skip those calls in this case.

Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
External-issue: https://www.illumos.org/issues/9081
Closes openzfs#13048 
Closes openzfs#13098

snajpa pushed a commit to vpsfreecz/zfs that referenced this issue Oct 22, 2022
snajpa pushed a commit to vpsfreecz/zfs that referenced this issue Oct 22, 2022
snajpa pushed a commit to vpsfreecz/zfs that referenced this issue Oct 23, 2022