
During pool export flush the ARC asynchronously #16215

Open: don-brady wants to merge 2 commits into master

Conversation

@don-brady (Contributor) commented May 21, 2024

Motivation and Context

When a pool is exported, the ARC buffers in use by that pool are flushed (evicted) as part of the export. In addition, any L2 VDEVs are removed from the L2 ARC. Both of these operations are performed sequentially and inline to the export. For larger ARC footprints, this can represent a significant amount of time. In an HA scenario, this can cause a planned failover to take longer than needed and risk timeouts on the services backed by the pool data.

Description

The teardown of the ARC data used by the pool can be done asynchronously during a pool export. For the main ARC data, the spa load GUID is used to associate a buffer with the spa, so we can safely free the spa_t while the teardown proceeds in the background. For the L2ARC VDEV, the l2arc_dev_t keeps a copy of the vdev_t pointer, which was being used during teardown to calculate the buffer's asize from its psize when updating the L2ARC stats. This asize value can instead be captured when the buffer is created, eliminating the need for a late-binding asize calculation through the VDEV during teardown.
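
As a rough illustration of the asize capture (the helper and the b_asize field below are illustrative assumptions, not the patch's exact code):

```c
/*
 * Illustrative sketch only: record the allocated size when the L2
 * header is set up, so the later teardown can update stats without
 * dereferencing the vdev_t.  The helper and b_asize field are assumed
 * names, not the patch's exact code.
 */
static void
l2arc_hdr_capture_asize(arc_buf_hdr_t *hdr, l2arc_dev_t *dev)
{
	hdr->b_l2hdr.b_asize =
	    vdev_psize_to_asize(dev->l2ad_vdev, HDR_GET_PSIZE(hdr));
}
```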

Added an arc_flush_taskq for these background teardown tasks. arc_fini() (e.g., during module unload) now waits for any outstanding teardown tasks to complete.
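
A minimal sketch of the dispatch path, assuming illustrative names (arc_flush_taskq comes from this PR; arc_async_flush_t, arc_flush_guid(), and arc_flush_async() are placeholders, not necessarily the patch's actual symbols):

```c
/* Illustrative sketch; identifiers other than arc_flush_taskq are assumed. */
typedef struct arc_async_flush {
	uint64_t	af_spa_guid;	/* spa_load_guid captured at export */
	list_node_t	af_node;	/* entry on a list of active flushes */
} arc_async_flush_t;

/* Assumed helper: evict all unreferenced buffers tagged with this guid. */
extern void arc_flush_guid(uint64_t guid);

static void
arc_flush_task(void *arg)
{
	arc_async_flush_t *af = arg;

	arc_flush_guid(af->af_spa_guid);
	kmem_free(af, sizeof (arc_async_flush_t));
}

void
arc_flush_async(spa_t *spa)
{
	arc_async_flush_t *af = kmem_zalloc(sizeof (*af), KM_SLEEP);

	af->af_spa_guid = spa_load_guid(spa);
	(void) taskq_dispatch(arc_flush_taskq, arc_flush_task, af, TQ_SLEEP);
}
```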

Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.

How Has This Been Tested?

  1. Manual testing with ARC and multiple L2ARC devices, up to about 24 GB of ARC data and ~100 GB of L2ARC capacity. The pool export went from about 45 seconds down to 5 seconds with the asynchronous teardown in place.
  2. Manually tested exporting while an L2ARC rebuild was still in progress. The L2 vdev waits for the rebuild to be canceled before proceeding with the teardown.
  3. Ran various ZTS test suites, such as l2arc, zpool_import, and zpool_export, to exercise the changed code paths.
  4. Ran ztest.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

@don-brady added the Status: Code Review Needed (Ready for review and testing) label May 22, 2024
@behlendorf self-assigned this May 25, 2024
@don-brady (Contributor Author)

@gamanakis -- if you have time could you look at the L2 part of this change? Thanks

@gamanakis (Contributor)

Thanks for including me on this, on a first pass it looks good.

@amotin (Member) left a comment

Looks interesting, but what happens if the pool (obviously with the same GUID) is re-imported while the async flush is still running?

Comment on lines +9438 to +9614
if ((dev = l2arc_dev_get_next()) == NULL ||
    dev->l2ad_spa == NULL) {
Member

Could you please explain the l2ad_spa and l2ad_vdev locking semantics, and as part of that, how l2ad_spa can be NULL here if we assume the locking is correct, or how places like arc_hdr_l2hdr_destroy() avoid a NULL dereference due to a race?

Contributor Author

Note that the above dev->l2ad_spa == NULL check was defensive (part of development) but is not necessary, since l2arc_dev_invalid() already checks under locks whether the spa was removed for a device. l2arc_dev_get_next() will never return with a NULL dev->l2ad_spa.

As far as locking is concerned:

  • The spa_namespace_lock covers a transition of spa->spa_is_exporting and the removal of a spa_t.
  • The spa config SCL_L2ARC lock protects a vdev/spa from being removed while in use.
  • The l2arc_dev_mtx protects the L2 device list and an L2 device's l2ad_spa and l2ad_vdev fields.

l2arc_dev_get_next() hands out L2 devices and returns with the spa config SCL_L2ARC lock held. There are two possible spa exceptions that l2arc_dev_get_next() checks for:

  • spa is being removed (dev->l2ad_spa->spa_is_exporting = B_TRUE) -- protected by spa_namespace_lock
  • spa was removed (dev->l2ad_spa = NULL) -- protected by l2arc_dev_mtx

This means that when an L2 device is being removed, both the l2arc_dev_mtx and the spa config SCL_L2ARC lock (as writer) should be held to prevent any races. Note that the latter is currently not the case and will be remedied.

Also, arc_hdr_l2hdr_destroy() is protected by the dev->l2ad_mtx lock. The l2ad_vdev can be NULL after a pool is exported while an async ARC removal is still in progress.
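
For illustration, a simplified sketch of how the destroy path described above could tolerate a missing vdev (not the exact diff; b_asize is the assumed captured-at-creation field):

```c
/* Simplified sketch, not the exact diff; b_asize is an assumed field. */
static void
arc_hdr_l2hdr_destroy_sketch(arc_buf_hdr_t *hdr)
{
	l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;
	uint64_t asize = hdr->b_l2hdr.b_asize;	/* captured at creation */

	ASSERT(MUTEX_HELD(&dev->l2ad_mtx));

	/* l2ad_vdev may already be NULL once the pool has been exported. */
	if (dev->l2ad_vdev != NULL)
		vdev_space_update(dev->l2ad_vdev, -asize, 0, 0);

	ARCSTAT_INCR(arcstat_l2_psize, -HDR_GET_PSIZE(hdr));
	ARCSTAT_INCR(arcstat_l2_lsize, -HDR_GET_LSIZE(hdr));
	arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
}
```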

As indicated above, there is a race that I now see in l2arc_remove_vdev(). It was holding the l2arc_dev_mtx to transition dev->l2ad_spa to NULL, but it wasn't taking the spa config SCL_L2ARC lock (as writer) to let any in-flight users from a past l2arc_dev_get_next() drain.

@don-brady (Contributor Author)

Looks interesting, but what happens if the pool (obviously with the same GUID) is re-imported while the async flush is still running?

Good question, and I tested this case. The async flush will continue its best-effort pass to flush buffers associated with the exported spa's guid. Any ARC buffers that the import uses will have a positive ref count and be skipped by the async flush task. You can think of it as an alternate ARC evict thread that targets buffers with a specific guid and a zero ref count, rather than evicting by age.

I suppose we could have the task periodically check whether there is an active spa with that guid and exit early. I'm not sure how common it is to export and then re-import right away on the same host. Before this change, you would have to wait until the export finished flushing the ARC buffers.

The ARC teardown for a spa can take multiple minutes, so you could even have a second pool export + import while the first arc_flush_task() is still running, and end up with two arc_flush_task() instances both looking to evict candidates for the same guid. This is not fatal, but a little weird.

@amotin (Member) commented Jun 12, 2024

Aside from it being weird, I worry that, even if unlikely, it is not impossible for the pool to be changed while exported, while the ARC still holds the old data.

@allanjude (Contributor)

Aside from it being weird, I worry that, even if unlikely, it is not impossible for the pool to be changed while exported, while the ARC still holds the old data.

I share this concern. Would it make sense to block importing the same pool again until eviction is complete?

@don-brady (Contributor Author)

Per @amotin -- add a zpool wait for ARC teardown

@don-brady (Contributor Author)

To address concerns about re-importing after the pool was changed on another host:

Save the txg at export and compare it to the imported txg.

  • If the same, cancel the teardown (the ARC data is still valid).
  • If different, force the import to wait for the teardown to complete (the ARC data can be stale).

@amotin (Member) commented Jun 18, 2024

Thinking more about it: since the ARC is indexed on DVA+birth, in the case of a pool export/import, if some blocks appear stale they should just never be used, unless we actually import some other pool somehow having the same GUID, or import the pool at an earlier TXG. We already have a somewhat similar problem with persistent L2ARC, where we load into the ARC headers for blocks that could be long freed from the pool, and they stay in the ARC until L2ARC rotation evicts them. But in the case of L2ARC we at least know that those stale blocks are from this pool and just won't be used again. I am not sure whether multiple pools with the same GUID is a realistic scenario, but importing the pool at an earlier TXG may be more realistic, and dangerous at the same time, since the same TXG numbers may be reused.

@don-brady (Contributor Author)

Update on re-import while ARC teardown is in progress:

Ever since the commit that added the zpool reguid feature, the ARC uses the spa_load_guid, not the spa's actual guid, for identification. This load guid is transient, not persistent, and will change at each import. So after the import, any blocks left in the ARC with the old load guid are orphaned and not associated with any spa.

So we don't need to worry about ARC blocks that are still around when the pool is re-imported, since the re-imported pool will identify its blocks using a different spa_load_guid.

@richardelling (Contributor)

@don-brady I was wondering when you were going to remember the behaviour of spa_load_guid.

@grwilson (Member) left a comment

Have you looked at having arc_evict prioritize evicting from flushed pools vs trying to evict buffers from active pools?

@don-brady (Contributor Author)

Have you looked at having arc_evict prioritize evicting from flushed pools vs trying to evict buffers from active pools?
@grwilson

I hadn't considered it beyond whether it was safe. Both arc_evict() and arc_flush() sit on top of arc_evict_state(). In the flush case it targets a specific spa (guid), and in the evict case it ignores targeting (i.e., guid == 0). So arc_evict_state() is either targeting a specific guid or none at all.
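
A simplified sketch of that targeting rule (the helper name is hypothetical; arc_evict_state_impl() does the equivalent check inline):

```c
/* Hypothetical helper illustrating the guid targeting described above. */
static boolean_t
arc_hdr_matches_target(const arc_buf_hdr_t *hdr, uint64_t guid)
{
	/* guid == 0: regular eviction, every buffer is a candidate. */
	if (guid == 0)
		return (B_TRUE);

	/* guid != 0: flush path, only buffers from that spa qualify. */
	return (hdr->b_spa == guid);
}
```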

We can have multiple threads (arc_evict() and multiple arc_flush_async() tasks) all running at the same time. The underlying arc_evict_state() uses a randomly selected sublist, so they will likely be working on different buffers.

(a) One option would be to have the arc_evict() thread back off when there are any active ARC flushes, so as to give those flush-initiated evictions priority, and maybe have the last ARC flush wake up the ARC evict thread.

(b) Another option would be to keep a list of active flush spa guids and have the arc_evict() thread only match on guids from the list if it is not empty -- so it ends up only targeting buffers that need to be flushed.

@don-brady (Contributor Author)

Rebased to fix merge conflicts.

The eviction thread now considers active spa flushes and targets those spas (if any).

module/zfs/arc.c Outdated
 static uint64_t
 arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker,
-    uint64_t spa, uint64_t bytes)
+    uint64_t bytes, uint64_t spa_list[], unsigned int spa_cnt)
Member

At some point I changed the argument order to make bytes the last and the targets of the operation first, to make it more readable. I am not getting the motivation for this change.

Contributor Author

I abandoned the commit with this change (Changed arc evict to prioritize unloaded spas)

module/zfs/arc.c Outdated
Comment on lines 4241 to 4243
uint64_t spa_list[16];
unsigned int spa_cnt =
    arc_async_flush_init_spa_list(spa_list, 16);
Member

arc_evict_impl() is used not only for flushing, but also for regular evictions. I can see some logic in guessing that if we need to flush something, evicting other data at the same time may be less productive. But I am not sure we can know that the particular state we are asked to evict now has data for the pools being flushed. Anyway, since we are already actively flushing, memory pressure is quite unlikely, so I don't think we should get here often.

Also, the hard-coded array of 16 entries looks dirty.

Contributor Author

I abandoned the commit with this change (Changed arc evict to prioritize unloaded spas)

Comment on lines +4544 to +4516
/*
 * unlikely, but if we couldn't dispatch then use an inline flush
 */
if (tid == TASKQID_INVALID) {
Member

If we added a taskq_ent_t into arc_async_flush_t, the error here would be impossible.

Contributor Author

Are you referring to the chained assignment?

Member

No. I mean preallocating the task to avoid failures later.
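
A sketch of that suggestion, reusing the illustrative arc_async_flush_t from above and extending it: with an embedded, preallocated taskq_ent_t the dispatch can use taskq_dispatch_ent(), which cannot fail, so no inline fallback would be needed.

```c
/* Sketch of the suggestion; names are illustrative, not the actual patch. */
typedef struct arc_async_flush {
	uint64_t	af_spa_guid;
	taskq_ent_t	af_tqent;	/* preallocated task entry */
	list_node_t	af_node;
} arc_async_flush_t;

static void
arc_flush_async_dispatch(arc_async_flush_t *af)
{
	taskq_init_ent(&af->af_tqent);
	/* taskq_dispatch_ent() cannot fail, so no inline flush is needed. */
	taskq_dispatch_ent(arc_flush_taskq, arc_flush_task, af, 0,
	    &af->af_tqent);
}
```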

This also includes removing L2 vdevs asynchronously

Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.

Signed-off-by: Don Brady <don.brady@klarasystems.com>
The zpool reguid feature introduced the spa_load_guid, which is a
transient value used for runtime identification purposes in the ARC.
This value is not the same as the spa's persistent pool guid.

However, the value is seeded from spa_generate_load_guid() which
does not check for uniqueness against the spa_load_guid from other
pools.  Although extremely rare, you can end up with two different
pools sharing the same spa_load_guid value!

This change guarantees that the value is always unique and
additionally not still in use by an async arc flush task.

Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.

Signed-off-by: Don Brady <don.brady@klarasystems.com>
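
A hypothetical sketch of such a uniqueness check (spa_load_guid_exists() and arc_async_flush_guid_inuse() are assumed helper names, not necessarily the commit's actual symbols):

```c
/* Hypothetical sketch; the two helper predicates are assumed names. */
uint64_t
spa_generate_load_guid_unique(void)
{
	uint64_t guid;

	do {
		guid = spa_generate_guid(NULL);
	} while (spa_load_guid_exists(guid) ||
	    arc_async_flush_guid_inuse(guid));

	return (guid);
}
```
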
@don-brady (Contributor Author)

Removed last commit and rebased to latest master branch.

Labels: Status: Code Review Needed (Ready for review and testing)
Projects: None yet
7 participants