
Podman run with volume permissions issue - different behavior than Docker #10606

Closed
yuzhouliu-solace opened this issue Jun 8, 2021 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

yuzhouliu-solace commented Jun 8, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I'm migrating from Docker to Podman and experimenting with rootful Podman. Inside the container I'm running, the processes run as user 1000001. First I create a volume, then do a podman run with that volume.
The container has a check script that verifies that user 1000001 inside the container can write to the mounted directory path.
This check fails with Podman but passes with Docker.

Steps to reproduce the issue:

  1. sudo podman volume create storage-group

  2. sudo podman run -d --volume storage-group:/var/lib/solace solace-pubsub-standard:9.9.0.28

  3. sudo podman logs solace shows that a permissions issue occurred and the container stopped

Describe the results you received:
The non-root container user does not have write permission to the mounted directory path at /var/lib/solace

On the host, the Podman volume has the following permissions.

[ec2-user@ip-172-31-15-15 ~]$ sudo ls -al /var/lib/containers/storage/volumes/
...
drwx------.  3 root root  19 Jun  8 20:59 storage-group

[ec2-user@ip-172-31-15-15 ~]$ sudo ls -al /var/lib/containers/storage/volumes/storage-group
...
drwxr-xr-x. 2 root root  6 Jun  8 20:59 _data

Notice the _data directory is not writable by group or other. If I chmod 777 /var/lib/containers/storage/volumes/storage-group/_data, the issue is resolved and the container boots up.
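The effect of that chmod can be sketched without Podman at all: the container user (UID 1000001) is neither the owner (root) nor in the owning group, so only the "other" permission bits apply to it, and `755` denies the write while `777` allows it. A scratch directory stands in for the volume's `_data` path:

```shell
# Stand-in for /var/lib/containers/storage/volumes/storage-group/_data
dir=$(mktemp -d)
chmod 755 "$dir"      # drwxr-xr-x, as `podman volume create` leaves it
stat -c '%a' "$dir"   # prints 755: "other" can list but not write
chmod 777 "$dir"      # the workaround described above
stat -c '%a' "$dir"   # prints 777: any UID, e.g. 1000001, can now write
rmdir "$dir"
```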

Describe the results you expected:

On Docker where this does work, the directory permissions are:

# After creating volume, and before docker run
[ec2-user@ip-172-31-5-83 ~]$ sudo ls -al /var/lib/docker/volumes/
...
drwx-----x.  3 root root     19 Jun  8 20:56 storage-group

[ec2-user@ip-172-31-5-83 ~]$ sudo ls -al /var/lib/docker/volumes/storage-group
...
drwxr-xr-x. 2 root root  6 Jun  8 20:56 _data


# After docker run
[ec2-user@ip-172-31-5-83 ~]$ sudo ls -al /var/lib/docker/volumes/
...
drwx-----x.  3 root root     19 Jun  8 20:56 storage-group

[ec2-user@ip-172-31-5-83 ~]$ sudo ls -al /var/lib/docker/volumes/storage-group
...
drwxrwxrwx. 9 root root 165 Jun  8 20:57 _data

Notice in Docker's case, after running the container, the _data directory is drwxrwxrwx.

Additional information you deem important (e.g. issue happens only occasionally):
I would just like to better understand the behavior difference between Docker and Podman, or to confirm whether this is a bug.
If this is an issue with the container image itself I can change the behavior there, but I would like guidance on best practices.

Output of podman version:

Version:      3.0.2-dev
API Version:  3.0.0
Go Version:   go1.15.7
Built:        Wed Apr  7 08:36:54 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.8
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.26-1.module+el8.4.0+10607+f4da7515.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.26, commit: b883692702312720058141f16b6002ab26ead2e7'
  cpus: 2
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: ip-172-31-15-15.ec2.internal
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-305.el8.x86_64
  linkmode: dynamic
  memFree: 181899264
  memTotal: 3915423744
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module+el8.4.0+10607+f4da7515.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: true
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 8h 11m 55.71s (Approximately 0.33 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 3
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 1617784614
  BuiltTime: Wed Apr  7 08:36:54 2021
  GitCommit: ""
  GoVersion: go1.15.7
  OsArch: linux/amd64
  Version: 3.0.2-dev

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.0.1-6.module+el8.4.0+10607+f4da7515.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

No, I haven't tested with latest 3.2.0 version.
Yes, I've read "Can't use volume mount, get permission denied" and tried all 3 points, but I still have the issue.

Additional environment details (AWS, VirtualBox, physical, etc.):

AWS EC2 instance running RHEL 8.4

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 8, 2021
rhatdan (Member) commented Jun 9, 2021

Does solace-pubsub-standard:9.9.0.28 chmod the directory to 777?

yuzhouliu-solace (Author) replied:
No, it only does a check that user 1000001 inside the container can write to the mounted path --volume storage-group:/var/lib/solace, i.e. user 1000001 can write to /var/lib/solace inside the container.

Should it do a chown, or chmod?

yuzhouliu-solace (Author) commented Jun 9, 2021

Another data point...

With no existing volumes, create a container with a volume mapped to a container path:
sudo podman create -v storage-group:/var/lib/solace --shm-size=1g solace-pubsub-standard:9.9.0.28

[ec2-user@ip-172-31-15-15 ~]$ sudo ls -al /var/lib/containers/storage/volumes/storage-group
total 0
drwx------. 3 root root 19 Jun  9 13:41 .
drwx------. 3 root root 27 Jun  9 13:41 ..
drwxr-xr-x. 2 root root  6 Jun  9 13:41 _data

^ Notice the ownership is root and permissions drwxr-xr-x

As soon as I do sudo podman start <container id>

[ec2-user@ip-172-31-15-15 ~]$ sudo ls -al /var/lib/containers/storage/volumes/storage-group
total 0
drwx------. 3 root    root  19 Jun  9 13:41 .
drwx------. 3 root    root  27 Jun  9 13:41 ..
drwxr-xr-x. 9 1000001 root 165 Jun  9 13:41 _data

^ Notice the ownership changes to the container user 1000001 automatically.
Within the container I don't have any errors anymore.

I guess my question is: when we pre-create a volume using sudo podman volume create <>, it obviously cannot know the non-root container user ID that will be attached later, so it creates the volume with owner root. But once you do a sudo podman run -v <volume>:<container_path>, Podman should change the ownership, right?
Or is it left to the user to manually chown that host volume path?
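One option worth noting (an addition, not from this thread): newer Podman releases accept a `U` volume option that tells Podman to chown the mount source to match the UID/GID the container process runs as, which avoids the manual chown. Check `man podman-run` to confirm your version supports it; a sketch:

```shell
# Sketch, assuming a Podman version that supports the :U volume option.
# :U asks Podman to recursively chown the volume contents so they are
# owned by the container user (here, UID 1000001).
sudo podman run -d \
    --volume storage-group:/var/lib/solace:U \
    solace-pubsub-standard:9.9.0.28
```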

Luap99 (Member) commented Jun 9, 2021

I just pulled the image; the directory /var/lib/solace is mode 777 inside the image. The volume is not preserving the correct permissions. I believe #10531 should fix this.
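What "not preserving the correct permissions" means here: on first use of an empty named volume, Podman copies the image path's contents up into the volume, and before the linked fix the directory's own mode was not carried over. The intended behavior is roughly `cp -a` semantics, sketched below with plain directories standing in for the image path and the volume:

```shell
# img stands in for /var/lib/solace inside the image (mode 777);
# vol stands in for the volume's _data directory.
img=$(mktemp -d)
chmod 777 "$img"
vol=$(mktemp -d) && rmdir "$vol"  # reserve a path, then let cp create it
cp -a "$img" "$vol"               # -a preserves mode bits (and ownership)
stat -c '%a' "$vol"               # prints 777, matching the image path
rm -rf "$img" "$vol"
```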

davidkhala commented Jun 14, 2022

> I guess my question is, when we pre-create volume using sudo podman volume create <> it obviously cannot know the non-root container user id that will be attached later. So it creates it as owner=root. But once you do a sudo podman run -v <volume>:<container_path>, podman should change the ownership, right?

After one year, I still have a similar issue on Podman 3.4.2. Not sure if it is still the same case.
EDIT: It seems not to be an issue in Podman 4.0.2.

rhatdan (Member) commented Jun 14, 2022

The current release of Podman is 4.1

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023
4 participants