
finalize-staged: Ensure /boot and /sysroot automounts don't expire #2544

Merged 2 commits into ostreedev:main on Aug 30, 2022

Conversation

dbnicholson
Member

If /boot or /sysroot are automounts, then the unit will be stopped
as soon as the automounts expire. That would defeat the purpose of
using systemd to delay finalizing the deployment until shutdown. This is
not uncommon as systemd-gpt-auto-generator will create an automount
unit for /boot when it's the EFI System Partition and there's no fstab
entry.

Instead of relying on systemd to run the command via ExecStop at the
appropriate time, have finalize-staged open /boot and /sysroot and
then block on SIGTERM. Having the directories open will prevent the
automounts from expiring, and then we presume that systemd will send
SIGTERM when it's time for the service to stop. Finalizing the
deployment still happens when the service is stopped. The difference is
that the process is already running.

In order to keep from blocking legitimate sysroot activity prior to
shutdown, the sysroot lock is only taken after the signal has been
received. Similarly, the sysroot is reloaded to ensure the state of the
deployments is current.

Fixes: #2543
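
For context, here is a minimal sketch of the approach described above, assuming GLib; it is not the PR's actual code, and the lock/reload/finalize step is only indicated by a placeholder comment:

/* Hold /boot and /sysroot open so autofs-backed automounts cannot expire,
 * then block until systemd sends SIGTERM at shutdown. */
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <glib.h>
#include <glib-unix.h>

static gboolean
on_sigterm (gpointer user_data)
{
  gboolean *running = user_data;
  *running = FALSE;
  return G_SOURCE_REMOVE;
}

int
main (void)
{
  /* Open directory fds; as long as they stay open, the automounts are busy. */
  int boot_fd = open ("/boot", O_RDONLY | O_DIRECTORY | O_CLOEXEC);
  int sysroot_fd = open ("/sysroot", O_RDONLY | O_DIRECTORY | O_CLOEXEC);

  gboolean running = TRUE;
  g_unix_signal_add (SIGTERM, on_sigterm, &running);
  while (running)
    g_main_context_iteration (NULL, TRUE);

  /* Only now take the sysroot lock, reload the deployment state and
   * finalize the staged deployment (elided here). */

  if (boot_fd >= 0)
    close (boot_fd);
  if (sysroot_fd >= 0)
    close (sysroot_fd);
  return 0;
}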

@dbnicholson
Member Author

dbnicholson commented Feb 16, 2022

This passed the test suite for me locally but it's otherwise totally untested.

@cgwalters
Member

--- FAIL: fcos.upgrade.basic/upgrade-from-current (25.94s)
[2022-02-16T23:42:10.925Z]             basic.go:328: expected reboot into version 35.20220216.dev.0.kola, but got version 35.20220216.dev.0

Hmm, that looks likely to be related to this.

@cgwalters
Member

Feb 16 23:41:46.048773 unknown[1309]: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
Feb 16 23:41:46.050395 ostree[1309]: error: Read-only file system
Feb 16 23:41:46.050743 systemd[1]: Stopping OSTree Finalize Staged Deployment...
Feb 16 23:41:46.052224 systemd[1]: dbus-broker.service: Deactivated successfully.
Feb 16 23:41:46.054235 systemd[1]: ostree-finalize-staged.service: Main process exited, code=exited, status=1/FAILURE
Feb 16 23:41:46.054465 systemd[1]: ostree-finalize-staged.service: Failed with result 'exit-code'.
Feb 16 23:41:46.054840 systemd[1]: Stopped OSTree Finalize Staged Deployment.

@dbnicholson
Member Author

I rewrote this to use g_unix_signal_add + g_main_context_iteration and it seems nicer.

@dbnicholson
Member Author

> Feb 16 23:41:46.048773 unknown[1309]: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
> Feb 16 23:41:46.050395 ostree[1309]: error: Read-only file system
> Feb 16 23:41:46.050743 systemd[1]: Stopping OSTree Finalize Staged Deployment...
> Feb 16 23:41:46.052224 systemd[1]: dbus-broker.service: Deactivated successfully.
> Feb 16 23:41:46.054235 systemd[1]: ostree-finalize-staged.service: Main process exited, code=exited, status=1/FAILURE
> Feb 16 23:41:46.054465 systemd[1]: ostree-finalize-staged.service: Failed with result 'exit-code'.
> Feb 16 23:41:46.054840 systemd[1]: Stopped OSTree Finalize Staged Deployment.

This is because I changed it to use OSTREE_ADMIN_BUILTIN_FLAG_UNLOCKED so that initially the sysroot is not locked. However, I missed the part about setting up the mount namespace so that /boot and /sysroot can be remounted read-write. I wanted to factor out the code in ostree_admin_option_context_parse, but I got a little confused about the different use cases and left a FIXME. If you see a good way to handle that, I'll give it a shot.

Member

@cgwalters cgwalters left a comment

Looks generally good to me. Did you test this locally now?

At least our CI will cover this pretty well, but we just need to check any failures there versus the random flakes.

   * FIXME: This overlaps with the mount namespace handling in
   * ostree_admin_option_context_parse. That should be factored out.
   */
  if (unshare (CLONE_NEWNS) < 0)
Member

Ohhh, right. I think it would be cleaner to have systemd do this for us via e.g.
MountFlags=slave
in the unit. See e.g.
https://github.com/coreos/rpm-ostree/blob/baa2eba2d77647204be2baff2b385e16220e0fac/src/daemon/rpm-ostreed.service.in#L11
(In rpm-ostree we always do everything via the daemon; unfortunately that's not true of ostree today, but in this case we know we're running in systemd and can use its features)

(Although I guess this direction conflicts with people who are trying to do ostree without systemd, but...I think the onus is going to be increasingly on them to at least parse a subset of unit files)

But, OK as is too - i.e. we can merge this and do followups.

Member Author

Just to make sure I'm following this right, because the unit has ProtectHome and ReadOnlyPaths, systemd is already going to set up a mount namespace and there's no reason to create another one. Is that right? I sorta feel like you'd want to have code to check whether the process was in its own namespace or not.

Another thing I thought of while looking at this section. The unit talks about how only ProtectHome can be used, but then the sysroot goes and remounts /boot and /sysroot read-write. Furthermore, it talks about how it needs to remove /var/.updated, but isn't that within the stateroot /var? Oh, I see that there's some handling for /var as a separate mount. Anyways, this all looks like it could be handled with:

ProtectSystem=strict
ReadWritePaths=/var

along with the mount handling. Maybe another day, though.

Member

> Just to make sure I'm following this right, because the unit has ProtectHome and ReadOnlyPaths, systemd is already going to set up a mount namespace and there's no reason to create another one.

Ah yep, those already imply MountFlags=slave indeed.

> Is that right? I sorta feel like you'd want to have code to check whether the process was in its own namespace or not.

I think the simplest is to just error out if we're not running under systemd. The pattern I've used elsewhere is to check the INVOCATION_ID env var.

Member

Personally I've definitely called ostree admin finalize-staged manually when debugging stuff. I could fake out the INVOCATION_ID I guess, but maybe more elegant to do:

  • if in systemd unit, don't unshare
  • otherwise unshare or die trying

?
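
A sketch of that fallback, using the INVOCATION_ID heuristic mentioned above (ensure_mount_namespace is a hypothetical helper; note this assumes the unit's sandboxing options already gave it a private mount namespace, which is what the thread below digs into):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static int
ensure_mount_namespace (void)
{
  /* systemd sets INVOCATION_ID for every unit it starts. */
  if (getenv ("INVOCATION_ID") != NULL)
    return 0;   /* in a unit: rely on the namespace systemd set up */

  /* manual invocation: unshare or die trying */
  if (unshare (CLONE_NEWNS) < 0)
    {
      perror ("unshare(CLONE_NEWNS)");
      return -1;
    }
  return 0;
}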

Member

Yeah, fine by me.

But, one can also run systemctl stop ostree-finalize-staged.service too - and if we bailed out if INVOCATION_ID wasn't set, you'd probably know to do so.

@dbnicholson
Member Author

I think the LGTM issue should be fixed by #2545.

@dbnicholson
Member Author

> Looks generally good to me. Did you test this locally now?
>
> At least our CI will cover this pretty well, but we just need to check any failures there versus the random flakes.

Yeah, thanks for the kola tests. I kinda wish I could get that environment going locally.

Let me try to take this for a spin in my problematic VM with the /boot automount and make sure it actually does what I think it will.

@dbnicholson
Member Author

No changes in that last force push. Just wanted to rebase on the LGTM fix.

@lgtm-com

lgtm-com bot commented Feb 17, 2022

This pull request introduces 1 alert when merging aad2d75 into 12cafbc - view on LGTM.com

new alerts:

  • 1 for FIXME comment

   */
  if (unshare (CLONE_NEWNS) < 0)
    return glnx_throw_errno_prefix (error, "setting up mount namespace: unshare(CLONE_NEWNS)");
  ostree_sysroot_set_mount_namespace_in_use (sysroot);
Member Author

Hmm, this doesn't like that the sysroot is already loaded:

Feb 17 13:11:58 endless ostree[2472]: ostree_sysroot_set_mount_namespace_in_use: assertion 'self->loadstate < OSTREE_SYSROOT_LOAD_STATE_LOADED' failed

Since the pre-SIGTERM part just needs ostree_sysroot_get_fd, I guess what would help is to only initialize but not load. I think that would need a new OSTREE_ADMIN_BUILTIN_FLAG_NO_LOAD or something like that.

Member

If it helps, we can move this entire thing into an ostree_cmd__private__ ()->ostree_finalize_staged that can use all the internal APIs.

Member

A good reference for this is ostree-system-generator.c - the binary just calls into a private API from the library.

Member Author

Maybe, but the complication is in loading the sysroot appropriately, which currently always happens in the builtins (and which ostree-system-generator doesn't do). I think there are advantages to moving the whole thing internal, but the downside is repeating all of the setup code in src/ostree.

@dbnicholson
Member Author

Fun times. At least in my VM, when systemd sets up a mount namespace (due to either of ProtectHome or ReadOnlyPaths), then autofs doesn't consider the open /boot FD and expires the mount. If I remove both of ProtectHome and ReadOnlyPaths, then it works fine. I also found https://bugzilla.redhat.com/show_bug.cgi?id=1569146 and https://www.spinics.net/lists/autofs/msg01694.html.

@cgwalters
Member

cgwalters commented Feb 18, 2022

> At least in my VM, when systemd sets up a mount namespace (due to either of ProtectHome or ReadOnlyPaths), then autofs doesn't consider the open /boot FD and expires the mount. If I remove both of ProtectHome and ReadOnlyPaths, then it works fine. I also found https://bugzilla.redhat.com/show_bug.cgi?id=1569146 and https://www.spinics.net/lists/autofs/msg01694.html.

😢

So...a more complex solution is something like this:

ExecStart=+ostree admin finalize-staged --hold-boot
ExecStart=ostree admin finalize-staged

The --hold-boot process runs in the root mount namespace, holds an fd open and waits for SIGTERM. When it gets SIGTERM, it creates /run/ostree-finalize-proceed, and then waits for /run/ostree-finalize-done to exist. The main finalize-staged process waits for SIGTERM, then waits for /run/ostree-finalize-proceed to exist. The main process completes finalization, touches /run/ostree-finalize-done, then both processes exit.

(There may be a bit more elegant way to do IPC between these two processes; this relates to https://lists.freedesktop.org/archives/systemd-devel/2021-February/046112.html )
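
For illustration, a rough sketch of that flag-file handshake; the /run/ostree-finalize-* paths are hypothetical, this scheme is not what ultimately landed, and plain polling is used only to keep the sketch short:

#include <glib.h>

static gboolean
touch_flag (const char *path)
{
  return g_file_set_contents (path, "", 0, NULL);
}

static void
wait_for_flag (const char *path)
{
  while (!g_file_test (path, G_FILE_TEST_EXISTS))
    g_usleep (100 * 1000);   /* 100 ms */
}

/* --hold-boot side, after SIGTERM:
 *   touch_flag ("/run/ostree-finalize-proceed");
 *   wait_for_flag ("/run/ostree-finalize-done");
 *
 * main finalize-staged side, after SIGTERM:
 *   wait_for_flag ("/run/ostree-finalize-proceed");
 *   ...finalize the staged deployment...
 *   touch_flag ("/run/ostree-finalize-done");
 */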

Or, we could just drop the MountFlags bits from the unit, run in the root ns, and call unshare() as you were originally doing.

EDIT: To explain this more, notice the additional¹ + character in ExecStart=+ostree admin finalize-staged --hold-boot which ensures that process runs with full privileges, not namespaced etc.

¹ 😉

@dbnicholson
Member Author

> So...a more complex solution is something like this:
>
> ExecStart=+ostree admin finalize-staged --hold-boot
> ExecStart=ostree admin finalize-staged

This is pretty clever, but I don't think it would work. Once the initial process in the root namespace exits, then there won't be anything keeping /boot from expiring since the subsequent command is in a separate namespace. Still, the + is nice to know since there are many, many reasons why systemd will use a mount namespace.

I think the only way to do this is to handle the mount namespace in process as I was doing on my last attempt while ensuring it's run in the root namespace. Then the initial /boot FD stays open in the root namespace while the rest happens in the new namespace. You'd basically have to recreate everything systemd is doing, but I don't think it would be that complex and we'd have full control over exactly what should be mounted. For example, /var in the stateroot vs /var as separate filesystem. Well, after reading the systemd namespace setup it's a little more complex than I was thinking.

BTW, I have now recreated this autofs bug as simply as possible in a Rawhide VM and even made it use automount(8) rather than systemd to ensure it's really an autofs bug and not a systemd automount daemon bug. I'm going to file the kernel bug after I've finished gathering all the data.

@cgwalters
Member

> Once the initial process in the root namespace exits, then there won't be anything keeping /boot from expiring since the subsequent command is in a separate namespace.

In my proposal, both processes stay active until they are both complete.

@dbnicholson
Member Author

> > Once the initial process in the root namespace exits, then there won't be anything keeping /boot from expiring since the subsequent command is in a separate namespace.
>
> In my proposal, both processes stay active until they are both complete.

Ah, I missed that. That would work and removes the complication of handling all the mounts manually. I'll try to give that a shot (although I probably have to switch on to other tasks for a while).

I filed https://bugzilla.redhat.com/show_bug.cgi?id=2056090.

@jlebon
Member

jlebon commented Feb 22, 2022

Can't a service unit only have a single top-level process though? systemd.service(5) says:

> Unless Type= is oneshot, exactly one command must be given. When Type=oneshot is used, zero or more commands may be specified. Commands may be specified by providing multiple command lines in the same directive, or alternatively, this […]

And even in the oneshot case, the commands are executed sequentially.

> I think the only way to do this is to handle the mount namespace in process as I was doing on my last attempt while ensuring it's run in the root namespace. Then the initial /boot FD stays open in the root namespace while the rest happens in the new namespace. You'd basically have to recreate everything systemd is doing, but I don't think it would be that complex and we'd have full control over exactly what should be mounted. For example, /var in the stateroot vs /var as separate filesystem. Well, after reading the systemd namespace setup it's a little more complex than I was thinking.

Hmm, I think another simpler approach would be having the fd holding happen in a separate unit without a mount namespace which runs Before=ostree-finalize-staged.service.

@cgwalters
Member

> Hmm, I think another simpler approach would be having the fd holding happen in a separate unit without a mount namespace which runs Before=ostree-finalize-staged.service.

Yeah, also SGTM.

@dbnicholson
Member Author

FWIW, we decided to punt on this and not use staged deployments when /boot is an automount (see endlessm/eos-updater#301). I'll try to take another look at fixing this to ensure the initial part runs in the root namespace, but it's lower priority for me now.

@AdrianVovk

carbonOS is using a boot automount, but I don't necessarily see a reason to not just set TimeoutIdleSec=0 to disable auto-unmount on /boot. Of course, it would be nice if OSTree could just handle auto-unmounting /boot, but I don't think it's a super high priority either.

Perhaps it should be documented somewhere that staged deployments are incompatible with /boot timing out (maybe in the API doc for creating a staged deployment?) until this fix lands?
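
For reference, the TimeoutIdleSec workaround could be a drop-in along these lines (unit name and path are assumptions for a systemd-managed /boot automount such as the one systemd-gpt-auto-generator creates):

# /etc/systemd/system/boot.automount.d/no-idle-timeout.conf
[Automount]
TimeoutIdleSec=0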

@dbnicholson
Member Author

I moved away from this because we were going to EOL the product that this affects, but now that decision has been reversed. My plan here is:

  • Refactor ostree_admin_option_context_parse so that the setup can be composed elsewhere.
  • Add the finalize-staged --hold-boot (or maybe --wait?) option to skip all the setup, open /boot and block until SIGTERM (mostly already in the PR). Make finalize-staged without --hold-boot do the rest of the normal sysroot setup and the actual finalization.
  • Add an ostree-finalize-staged-hold-boot.service unit that explicitly runs in the root namespace with ExecStart=+/usr/bin/ostree admin finalize-staged --hold-boot. Add Wants/After=ostree-finalize-staged-hold-boot.service to ostree-finalize-staged.service (a sketch of such a unit follows below).

Seem reasonable? I think another way would be to just add a separate finalize-staged-hold-boot command instead of implementing it as an option.
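
For reference, a minimal sketch of what the proposed hold-boot unit could look like, using the names from the plan above (the option ended up being called --hold, and the merged unit may differ in other details):

[Unit]
Description=Hold /boot open while the staged OSTree deployment is finalized

[Service]
# No sandboxing directives here, so systemd creates no mount namespace for
# this unit; the '+' prefix additionally runs the command with full privileges.
ExecStart=+/usr/bin/ostree admin finalize-staged --hold-boot

# ostree-finalize-staged.service would then gain:
#   Wants=ostree-finalize-staged-hold-boot.service
#   After=ostree-finalize-staged-hold-boot.service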

@cgwalters
Member

Two systemd units as jlebon originally suggested, with two distinct CLI verbs, seems conceptually cleanest to me; it's just a bit more verbose to type out.

You're right that there are a ton of options that will cause a mount namespace to be created, but we can just basically not use any unit options in the ostree-finalize-staged-hold-boot.service.

@dbnicholson
Member Author

> You're right that there are a ton of options that will cause a mount namespace to be created, but we can just basically not use any unit options in the ostree-finalize-staged-hold-boot.service.

I'm going with no options in the unit and the explicit ExecStart=+ notation you mentioned earlier to ensure this is in the root namespace.

I'd like to add a test in tests/kolainst/destructive/ since it seems like I should be able to tweak the system to have /boot automounted. However, I'd really like to do that locally to speed up iterations instead of cycling through PR checks. Any suggestions on how to do that?

@cgwalters
Member

> However, I'd really like to do that locally to speed up iterations instead of cycling through PR checks. Any suggestions on how to do that?

A cool thing IMO about https://github.com/coreos/coreos-assembler is that the entire build and test tooling is one (large) container image you can run anywhere you like. Though it also requires doing a build in the corresponding userspace (e.g. f36). But it's basically using https://coreos.github.io/coreos-assembler/working/#using-overrides to drop in the new ostree binaries, plus kola run ext.ostree.destructive. (Actually you also need a make && sudo make install -C tests/kolainst to install the tests in the cosa container userspace.)

@dbnicholson
Member Author

> A cool thing IMO about https://github.com/coreos/coreos-assembler is that the entire build and test tooling is one (large) container image you can run anywhere you like. Though it also requires doing a build in the corresponding userspace (e.g. f36). But it's basically using https://coreos.github.io/coreos-assembler/working/#using-overrides to drop in the new ostree binaries, plus kola run ext.ostree.destructive. (Actually you also need a make && sudo make install -C tests/kolainst to install the tests in the cosa container userspace.)

That thing is really cool. I'm sure I wasn't driving it very well, but that was way faster than how I would have normally done something like that. I wish we had something like that for Endless.

@dbnicholson
Member Author

I think this is ready to review again. I ended up not touching the ostree_admin_option_context_parse and instead just calling it twice with different flags depending on the context. That seems to have worked.

The added kola test was extremely helpful to make sure this was actually doing what I thought it would do.

Member

@cgwalters cgwalters left a comment

Looks good to me as is, we can address anything else via followup commits too.

Thanks so much for working on this!

@@ -50,13 +65,57 @@ ot_admin_builtin_finalize_staged (int argc, char **argv, OstreeCommandInvocation

  g_autoptr(GOptionContext) context = g_option_context_new ("");
  g_autoptr(OstreeSysroot) sysroot = NULL;

  /* First parse the args without loading the sysroot to see what options are
Member

I'm OK with this but...I think we could also bypass the whole thing and just do

bool opt_hold = argc > 2 && strcmp (argv[1], "--hold") == 0;

or so? Or use an environment variable.

Dunno. I don't feel really strongly. My main concern is that someone later comes by and makes some sort of change that isn't ready for ostree_admin_option_context_parse being invoked twice.

Member Author

Eh, I'm personally not a fan of adding ad hoc option handling. What if the first option is --verbose or something else?

I think I can go back to the refactoring route and split out the sysroot opening.

Member

I'm OK with the code as is too.

Though...hmm, wouldn't it work to use plain old g_option_context_parse() at least? That would avoid any issues with invoking ostree_admin_option_context_parse twice.

Member Author

I think it would except that you'd have to duplicate the handling of the --sysroot option at least. I think the refactoring I added to initialize but not load the sysroot should work well. See the new prep commit I added.

It can be useful to parse the options and initialize the sysroot without
actually loading it until later. Factor out the sysroot loading to a new
`ostree_admin_sysroot_load` and add a new
`OSTREE_ADMIN_BUILTIN_FLAG_NO_LOAD` flag to accommodate this.

If `/boot` is an automount, then the unit will be stopped as soon as the
automount expires. That would defeat the purpose of using systemd to
delay finalizing the deployment until shutdown. This is not uncommon as
`systemd-gpt-auto-generator` will create an automount unit for `/boot`
when it's the EFI System Partition and there's no fstab entry.

To ensure that systemd doesn't stop the service early when the `/boot`
automount expires, introduce a new unit that holds `/boot` open until
it's sent `SIGTERM`. This uses a new `--hold` option for
`finalize-staged` that loads but doesn't lock the sysroot. A separate
unit is used since we want the process to remain active throughout the
finalization run in `ExecStop`. That wouldn't work if it was specified
in `ExecStart` in the same unit since it would be killed before the
`ExecStop` action was run.

Fixes: ostreedev#2543
Member

@cgwalters cgwalters left a comment

Nice, thanks so much for doing this!

Comment on lines +100 to +104
  gboolean running = TRUE;
  g_unix_signal_add (SIGTERM, sigterm_cb, &running);
  g_print ("Waiting for SIGTERM\n");
  while (running)
    g_main_context_iteration (NULL, TRUE);
Member

I'm fine with this as is, but in general I think it's even cleaner where possible to not install a SIGTERM handler and let the kernel simply kill the process. That way there's one fewer context switch at shutdown time.

IOW, we could just do:

while (true)
  g_main_context_iteration (NULL, TRUE);

Member Author

Very true. I guess I like the SIGTERM handler so that the service exits gracefully instead of showing as failed, although in this case you'd probably never look at the status of the service.

Member

What would show as failed?

[root@cosa-devsh ~]# cat /etc/systemd/system/testunit.service
[Service]
ExecStart=sleep infinity
[root@cosa-devsh ~]# systemctl start testunit
[root@cosa-devsh ~]# systemctl status testunit
● testunit.service
     Loaded: loaded (/etc/systemd/system/testunit.service; static)
     Active: active (running) since Tue 2022-08-30 20:21:23 UTC; 1s ago
   Main PID: 1792 (sleep)
      Tasks: 1 (limit: 1042)
     Memory: 284.0K
        CPU: 1ms
     CGroup: /system.slice/testunit.service
             └─ 1792 sleep infinity

Aug 30 20:21:23 cosa-devsh systemd[1]: Started testunit.service.
[root@cosa-devsh ~]# systemctl stop testunit
[root@cosa-devsh ~]# systemctl status testunit
○ testunit.service
     Loaded: loaded (/etc/systemd/system/testunit.service; static)
     Active: inactive (dead)

Aug 30 20:21:27 cosa-devsh systemd[1]: Stopping testunit.service...
Aug 30 20:21:27 cosa-devsh systemd[1]: testunit.service: Deactivated successfully.
Aug 30 20:21:27 cosa-devsh systemd[1]: Stopped testunit.service.

Member

PR in #2704

Member Author

I guess I was thinking that systemd would see the non-0 exit status and interpret it as failed. But it makes sense since it's the parent and can process the status appropriately.

@cgwalters cgwalters merged commit 6651b72 into ostreedev:main Aug 30, 2022
@dbnicholson dbnicholson deleted the finalize-block branch August 30, 2022 20:03
cgwalters added a commit to cgwalters/ostree that referenced this pull request Aug 30, 2022
Followup from discussion in
ostreedev#2544 (comment)

This is more efficient; no need to have the kernel context switch
us in at shutdown time just so we can turn around and call
`exit()`.
dbnicholson pushed a commit to endlessm/ostree that referenced this pull request Sep 2, 2022
Followup from discussion in
ostreedev/ostree#2544 (comment)

This is more efficient; no need to have the kernel context switch
us in at shutdown time just so we can turn around and call
`exit()`.

(cherry picked from commit 683e4ef)
dbnicholson added a commit to endlessm/eos-updater that referenced this pull request Sep 8, 2022
This reverts commit a19821a. OSTree has
been fixed to support this use case by keeping `/boot` open in the root
namespace until the staged deployment completes finalization. See
ostreedev/ostree#2544 for details.

https://phabricator.endlessm.com/T33775