
Strange behavior of idmapping for unprivileged isolated containers #1294

Closed
foxtrotcz opened this issue Oct 6, 2024 · 4 comments
foxtrotcz commented Oct 6, 2024

I discovered two strange behaviors of Incus when setting the subuid/subgid files and creating isolated containers.

1) First issue
This happens with Incus 6.0.1 from Debian Bookworm backports and with Incus 6.0.2 from Debian Trixie; it also happened when I compiled Incus myself. Clean install, all settings default.
In this case Incus requires one more ID than it actually needs.

Example and how to replicate:
We want to create 2 isolated containers and set subuid/subgid to the minimum range needed.
By default Incus starts isolated containers at offset 65536 and uses a range of 65536 IDs for each, so 3 * 65536 = 196608 IDs should be enough.
We set subuid/subgid:
root:1000000:196608

This should allow us to use host IDs 1000000 through 1196607.

Incus should then map IDs of isolated containers like this:

container 1:
  container ID 0     → host ID 1065536
  container ID 65535 → host ID 1131071
container 2:
  container ID 0     → host ID 1131072
  container ID 65535 → host ID 1196607
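
For clarity, here is the same arithmetic as a small Go sketch (illustration only, not Incus code; the constant names are mine):

package main

import "fmt"

func main() {
	const base = 1000000 // first host ID from root:1000000:196608
	const size = 65536   // default security.idmap.size
	const containers = 2

	// Isolated container c starts c ranges above the base offset.
	for c := 1; c <= containers; c++ {
		hostStart := base + c*size
		hostEnd := hostStart + size - 1
		fmt.Printf("container %d: 0 -> %d, %d -> %d\n", c, hostStart, size-1, hostEnd)
	}
	// Base range plus one range per isolated container.
	fmt.Printf("minimum subuid/subgid size: %d\n", (containers+1)*size) // 196608
}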

But this doesn't work: Incus complains that there are not enough uids/gids available for the second container.
Error: Failed instance creation: Failed creating instance record: Failed initializing instance: Not enough uid/gid available for the container
This error comes from here:

return nil, 0, fmt.Errorf("Not enough uid/gid available for the container")

To work around this you need to set the subuid/subgid range one ID larger:
root:1000000:196609

Seems like a small off-by-one bug.
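
To illustrate what a one-ID overshoot looks like, here is a hypothetical range check (my own sketch, not the actual Incus code) where comparing offset+size instead of offset+size-1 against the last usable host ID produces exactly this behavior:

package main

import "fmt"

// fits reports whether a map of `size` IDs at `offset` stays inside a host
// range of `rangeSize` IDs starting at `rangeStart`. The last ID actually
// used is offset+size-1, so that is what must be compared.
func fits(offset, size, rangeStart, rangeSize int64) bool {
	return offset+size-1 <= rangeStart+rangeSize-1
}

// buggyFits treats offset+size (one past the end) as the last used ID,
// so it rejects an allocation that exactly fills the range and demands
// one extra host ID.
func buggyFits(offset, size, rangeStart, rangeSize int64) bool {
	return offset+size <= rangeStart+rangeSize-1
}

func main() {
	// Second isolated container: 1131072..1196607 inside root:1000000:196608.
	fmt.Println(fits(1131072, 65536, 1000000, 196608))      // true: it fits exactly
	fmt.Println(buggyFits(1131072, 65536, 1000000, 196608)) // false: rejected
	fmt.Println(buggyFits(1131072, 65536, 1000000, 196609)) // true: needs the extra ID
}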

2) Second issue
This happens with Incus 6.0.2 or the Incus daily build from the Zabbly repo on Debian Bookworm. Clean install, all settings default.
This one is much simpler: in this case Incus never complains about uids/gids, even if I set a small range and create more isolated containers than should fit.

Example and how to replicate:
We set subuid/subgid:
root:1000000:80000
This shouldn't be enough for even one isolated container.

If you try creating any number of isolated containers, it works even though it shouldn't.
When checking /run/incus/container/lxc.conf you can see isolated containers mapped out of range.
When trying an unisolated container, its range size is set to 1000000000.
So it looks like Incus doesn't see the subuid/subgid settings at all and uses its default ranges as described here: https://linuxcontainers.org/incus/docs/main/userns-idmap/
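
A minimal sketch of what that fallback could look like, assuming subuid/subgid is only parsed when newuidmap/newgidmap can be found (this matches stgraber's explanation below; the sketch is illustrative, not Incus code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// If newuidmap is missing (uidmap package not installed), skip
	// /etc/subuid and /etc/subgid entirely and use the documented
	// default full-range map.
	if _, err := exec.LookPath("newuidmap"); err != nil {
		fmt.Println("using default map: base 1000000, size 1000000000")
		return
	}
	fmt.Println("newuidmap found: honoring /etc/subuid and /etc/subgid")
}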

@stgraber stgraber added the Bug Confirmed to be a bug label Oct 18, 2024
@stgraber stgraber self-assigned this Oct 18, 2024
@stgraber stgraber added this to the incus-6.7 milestone Oct 18, 2024
@foxtrotcz (Author) commented:

About the second issue:
I read https://discuss.linuxcontainers.org/t/trouble-with-idmaps-in-restricted-incus-container/21797/7
So it looks like it's because the Zabbly package doesn't install the uidmap package.
Not sure if that's a bug or by design.

@stgraber (Member) commented:

Figured out the off-by-one issue:

root@v1:~# incus profile show default | grep idmap
  security.idmap.isolated: "true"
  security.idmap.size: "65536"
root@v1:~# incus launch images:ubuntu/24.04 unisol1 -c security.idmap.isolated=false
Launching unisol1
root@v1:~# incus launch images:ubuntu/24.04 unisol2 -c security.idmap.isolated=false
Launching unisol2
root@v1:~# incus launch images:ubuntu/24.04 unisol3 -c security.idmap.isolated=false
Launching unisol3
root@v1:~# incus launch images:ubuntu/24.04 isol1
Launching isol1
root@v1:~# incus launch images:ubuntu/24.04 isol2
Launching isol2
root@v1:~# incus launch images:ubuntu/24.04 isol3
Launching isol3
Error: Failed instance creation: Failed creating instance record: Failed initializing instance: Not enough uid/gid available for the container
root@v1:~# cat /etc/subuid
root:1000000:196608
root@v1:~# cat /etc/subgid
root:1000000:196608
root@v1:~# for i in $(incus list -cn -fcsv); do echo "$i => $(incus exec $i -- cat /proc/self/uid_map)"; done
isol1 =>          0    1065536      65536
isol2 =>          0    1131072      65536
unisol1 =>          0    1000000      65536
unisol2 =>          0    1000000      65536
unisol3 =>          0    1000000      65536
root@v1:~# 
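
For reference, the numbers in that transcript decode as: the three unisolated containers share the base map starting at host ID 1000000, isol1 gets 1065536 through 1131071, and isol2 gets 1131072 through 1196607, the very last ID of root:1000000:196608.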

@stgraber (Member) commented:

The second issue is expected behavior. Basically, Incus will respect subuid/subgid if newuidmap/newgidmap is present on the system (the uidmap package on Debian/Ubuntu). The Zabbly packages have logic to put in valid entries for the root user during installation.

HOWEVER, in most cases, having uidmap installed on the system is just an annoyance, as it forces the user to keep calculating and updating maps. It's worth going through that if you have multiple container managers or need unprivileged users to directly consume uid/gid maps, but for most users it isn't needed and is just annoying. That's why my own packages are designed to work fine on a system with uidmap installed, but do not pull it in as a dependency.

@foxtrotcz (Author) commented:

Thanks for the answer and the quick fix of the bug.

@hallyn hallyn closed this as completed in c3dcb98 Oct 22, 2024