Changes to group membership are not reflected for current boot #528

Closed
leifliddy opened this issue Jan 7, 2022 · 17 comments

@leifliddy

If I change my group membership within a Lima VM, I don't see those changes reflected immediately, i.e.

[lima@lima-default ~]$ sudo usermod lima -g games 
[lima@lima-default ~]$ sudo usermod -aG video lima
[lima@lima-default ~]$ id
uid=502(lima) gid=1000(lima) groups=1000(lima) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Of course I've tried logging out and back in again, but to no avail. Rebooting the lima vm works.
newgrp works as well, but I don't like the fact that it opens a new shell.

[lima@lima-default ~]$ newgrp 
[lima@lima-default ~]$ id
uid=502(lima) gid=20(games) groups=20(games),39(video),1000(lima) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Is there a way for the lima (or default) user to see their updated group membership list without resorting to rebooting the VM or using newgrp?

I need to do this because my primary gid on macOS is 20 (and not 1000, the Lima VM default):

[leif.liddy@macos.example.com ~]$ id
uid=502(leif.liddy) gid=20(staff)....

I need the primary gids to match between macOS and Lima (it's a long story, but it involves accessing mounted host volumes from within a container). It would just be extremely convenient if I didn't have to reboot the Lima VM in order to do that.

@afbjorklund
Member

afbjorklund commented Jan 8, 2022

You need to kill the ssh socket, since it keeps the previous session alive.

anders@lima-default:/home/anders$ sudo usermod anders -g games
anders@lima-default:/home/anders$ sudo usermod -aG video anders
anders@lima-default:/home/anders$ id
uid=1000(anders) gid=1000(anders) groups=1000(anders)
anders@lima-default:/home/anders$ exit
logout
anders@ubuntu:~$ lima
anders@lima-default:/home/anders$ id
uid=1000(anders) gid=1000(anders) groups=1000(anders)
anders@lima-default:/home/anders$ exit
logout
anders@ubuntu:~$ rm ~/.lima/default/ssh.sock 
anders@ubuntu:~$ lima
anders@lima-default:/home/anders$ id
uid=1000(anders) gid=60(games) groups=60(games),44(video)
anders@lima-default:/home/anders$ exit
logout

By default, lima will keep the mux (ssh.sock) alive for 5 minutes (after logout).

https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing

SSH_CONFIG(5)

  • ControlMaster
  • ControlPath
  • ControlPersist
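For illustration, those three options combine roughly like this in an ssh config (a sketch, not Lima's exact generated config; ControlMaster auto is an assumption, while the socket path and the 5-minute persist come from this thread):

Host lima-default
  ControlMaster auto
  ControlPath ~/.lima/default/ssh.sock
  ControlPersist 5m

With ControlPersist set, the server-side session established by the master (and the group list it was started with) outlives the login shell, which is why simply logging out and back in doesn't pick up new groups.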

@leifliddy
Author

Nice, works perfectly. Thank you!

@afbjorklund
Member

afbjorklund commented Jan 8, 2022

This might need to be documented; it also affected important stuff like the prompt color :-)

There is also limactl --debug shell default (instead of just lima), to show more details...

@leifliddy
Author

leifliddy commented Jan 9, 2022

So if I start a new Lima machine, change the primary group of the lima user, delete the ssh.sock file, and then run a container that mounts a local directory, I get this error on the next reboot of the Lima machine when trying to start the container:

  File "/usr/local/lib/python3.9/site-packages/podman/domain/containers.py", line 363, in start
    response.raise_for_status()
  File "/usr/local/lib/python3.9/site-packages/podman/api/client.py", line 65, in raise_for_status
    raise APIError(cause, response=self._response, explanation=message)
podman.errors.exceptions.APIError: 500 Server Error: Internal Server Error (error mounting storage for container 3547b779243fec0c0eca27ffd00cab9356d5415ec41ad9111c2d82ba37f2de7a: open /home/lima.linux/.local/share/containers/storage/overlay/52f21a23ae8ef87c3ff45ca0dd04960c358f3851bf042e763991351c54bbca4f/work/work/incompat: permission denied)

So it looks like I still need to reboot the Lima machine after changing the primary group of the default lima user.
Oh well, it's not a huge inconvenience or anything. It would be kind of cool if there were a setting in the YAML config to force Lima to set the primary gid of the default lima user to whatever the primary gid of the macOS user is.

The logic would go something like this:
If a group matching the primary gid already exists on the Lima VM, then make that the primary group of the lima user.
If a group matching the macOS primary gid doesn't exist on the Lima VM, then create a group with that gid. Not sure what to call the group, maybe macos or something.....or maybe that could be defined in the YAML config as well.
Then you would just need to create a check to ensure that that group name doesn't already exist on the Lima VM.
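Roughly, a sketch of that logic as a boot-time shell step (HOST_GID is hypothetical here and would have to be passed in from the host somehow; the fallback group name macos is just the placeholder mentioned above):

if ! getent group "${HOST_GID}" >/dev/null; then
  # no group with the host's primary gid exists yet, so create one;
  # groupadd fails if the chosen name is already taken, which covers the name check
  groupadd -g "${HOST_GID}" macos
fi
usermod -g "${HOST_GID}" "${LIMA_CIDATA_USER}"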
Anyways, just a thought.....
Thanks for your work on this!

@afbjorklund
Member

Deleting the socket file might have been a bit brutal... There is also an "exit" option, which might have been better?

@leifliddy
Author

I can try it...how does the "exit" option work?

@afbjorklund
Member

AFAICT, it should terminate the master more gracefully.

But I don't know how it will affect the sshfs, and it probably doesn't help with the primary group issues described either?
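By "exit" option I mean asking the master to shut down via the control socket instead of deleting the socket file, something like this (socket path and host name taken from the earlier example; adjust for your instance):

# check whether a master is still running behind the socket
ssh -S ~/.lima/default/ssh.sock -O check lima-default

# ask the master to exit gracefully instead of rm'ing the socket
ssh -S ~/.lima/default/ssh.sock -O exit lima-default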

@leifliddy
Author

leifliddy commented Jan 9, 2022

I'm just digging a bit deeper into how lima works.

https://github.com/lima-vm/lima/blob/master/pkg/cidata/cidata.TEMPLATE.d/boot/

I've already sorted a better way to terminate a session, and that's with
loginctl terminate-user lima

That's definitely less brutal than manually removing the ssh.sock file.
I'll figure something out, just need to understand things a bit more.

@jandubois
Member

I've already sorted a better way to terminate a session, and that's with loginctl terminate-user lima

Yes, that's what boot/07-etc--environment.sh does to make sure the current session loads the /etc/environment file again (via PAM).

Note that in general the user name is not lima, but the same name as on the host. It is only lima when the host username is not a valid Linux username (in your case because it includes a . in the name).

@leifliddy
Author

leifliddy commented Jan 9, 2022

Note that in general the user name is not lima

Yeah, I understood that part. It must have been a blanket rule, because some distros support dots in the username (like Fedora), and some don't (like Ubuntu).

In any case, I'm going to leave Lima and go back to podman-machine (I was only using Lima to run a VM to run podman). I'm a pretty hardcore Linux user, but unfortunately I have to use a Mac for work. I really just need to find a suitable replacement for Docker Desktop.

I tried out podman-machine a couple of weeks ago and really liked it, but I had a few issues with it.
One issue was that podman build didn't seem to do anything, but that was because I didn't understand the concept of the build context directory: all of the subdirectories and files in the same directory as the Dockerfile were being copied over to the podman-machine VM, which was several GB worth of data; that's why it seemed like nothing was happening.
I literally just created a .containerignore file and now everything works perfectly.
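Something along these lines (a hypothetical example; the entries depend on whatever large directories sit next to your Dockerfile):

# .containerignore uses the same syntax as .dockerignore
.git
data/
*.iso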
Well, almost... I still need to be able to mount local directories to a container.
But it looks like that issue has recently been resolved:
containers/podman#11454

Thanks for that! Going to try that now....
Hopefully everything will work and I can just transition to podman-machine!

@afbjorklund
Member

But....it looks like that issue has recently been resolved.

Note that you need to patch qemu, for "virtfs" to be available on Darwin.

But you could use sshocker to expose the files the same way as Lima does?

https://github.com/lima-vm/sshocker
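Roughly like this; treat the exact flags as an assumption and check the sshocker README (host and paths here are placeholders):

# reverse-sshfs mount of the current directory into the remote host, plus a port forward
sshocker -v .:/mnt/sshocker -p 8080:80 user@example.com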

@chancez
Contributor

chancez commented May 18, 2022

I tried the approach of using loginctl terminate-user $USER within the provision script, which does correctly make my groups right when I use limactl shell; however, the portForward sockets don't seem to be re-established until I reboot.

I'm trying to configure rootful docker (rootless has some issues for what I'm trying to do), and basically the only way I can get docker to work on my host is to create the VM and restart it, so that the socket forwarding has the correct permissions on /var/run/docker.sock.

@jandubois
Member

@chancez Can't you instead change the permissions on the socket to make it world-read-writable?

@chancez
Contributor

chancez commented May 18, 2022

@jandubois I can give it a shot. Based on my reading of the issues, however, the provision scripts run after the ssh tunnel is set up, so the socket perms would be updated too late.

@jandubois
Member

Based on my reading of the issues, however, the provision scripts run after the ssh tunnel is set up, so the socket perms would be updated too late.

I'm afraid you may be right on this. :(

@chancez
Contributor

chancez commented May 18, 2022

Well, that worked, surprisingly. I bet SSH doesn't open the socket until the first attempt to use it, so the permissions are correct by that point.

A snippet of the relevant provision script:

- mode: system
  script: |
    #!/bin/bash
    set -eux -o pipefail
    command -v docker >/dev/null 2>&1 && exit 0
    export DEBIAN_FRONTEND=noninteractive
    curl -fsSL https://get.docker.com | sh
    # hack to make ssh port forward able to access docker
    chmod 777 /var/run/docker.sock
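A quick way to confirm the hack took effect from inside the VM:

# expect srwxrwxrwx on the socket after the provision script has run
ls -l /var/run/docker.sock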

@jandubois
Member

I tried the approach of using loginctl terminate-user $USER within the provision script, which does correctly make my groups right when I use limactl shell; however, the portForward sockets don't seem to be re-established until I reboot.

So if this is how it works, then I don't understand why

if command -v loginctl >/dev/null 2>&1; then
  loginctl terminate-user "${LIMA_CIDATA_USER}" || true
fi
doesn't always break the socket forwarding? Is this a race condition?
