
Processes are not visible to other containers in the pod #7886

Closed
ah83 opened this issue Oct 2, 2020 · 6 comments · Fixed by #7902
ah83 commented Oct 2, 2020


/kind bug

Description

When I start multiple containers in the same pod with a shared PID namespace, each container can only see its own PIDs.
The shared PID namespace itself works: every container in the pod gets a PID other than 1,
and lsns shows that the PID namespace is shared between the containers.
After some debugging I found that ps is blocked by SELinux, because it cannot access the PIDs of the other containers in the proc filesystem.
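A quick way to confirm that the namespace really is shared (a sketch; it assumes util-linux with lsns is available inside the image, and the host PIDs shown are hypothetical placeholders):

# Inside each container: the PID-namespace ID reported for its processes should match.
lsns -t pid
# From the host, comparing the namespace links of the two containers' main processes:
ls -l /proc/<host-pid-of-container-1>/ns/pid /proc/<host-pid-of-container-2>/ns/pid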

Steps to reproduce the issue:

  1. Create the pod with a shared PID namespace:
podman pod create --name test --share=pid
  2. Create the first container running sleep:
podman run --pod test -it --rm centos:8 sleep 1000
  3. Create a second container with bash and run ps aux:
podman run --pod test -it --rm centos:8 bash
[root@ed6eb2547653 /]# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root          56  6.3  0.0  12024  3332 pts/0    Ss   08:20   0:00 bash
root          70  0.0  0.0  43960  3356 pts/0    R+   08:20   0:00 ps aux
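To see why ps comes up short, compare the SELinux process labels Podman assigned to the two containers (a sketch; the container IDs are placeholders, and ProcessLabel is the field Podman reports in its container inspect output):

podman ps --pod                                   # find the two container IDs in pod "test"
podman inspect --format '{{ .ProcessLabel }}' <container-1-id>
podman inspect --format '{{ .ProcessLabel }}' <container-2-id>
# With the bug described here, the two labels differ in their MCS categories.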

Describe the results you received:
A listing of /proc in the second container shows PID 1 and the PIDs of the first container.

[root@ed6eb2547653 /]# ls -l /proc  | head -n 10 
total 0
dr-xr-xr-x.  9 root root    0 Oct  2 08:18 1
dr-xr-xr-x.  9 root root    0 Oct  2 08:18 21
dr-xr-xr-x.  9 root root    0 Oct  2 08:20 56
dr-xr-xr-x.  9 root root    0 Oct  2 08:21 74
dr-xr-xr-x.  9 root root    0 Oct  2 08:21 75
drwxrwxrwt.  2 root root   40 Oct  2 08:20 acpi
-r--r--r--.  1 root root    0 Oct  2 08:21 buddyinfo
dr-xr-xr-x.  4 root root    0 Oct  2 08:18 bus
-r--r--r--.  1 root root    0 Oct  2 08:21 cgroups

Access to them is denied by SELinux:

cd /proc/21/
bash: cd: /proc/21/: Permission denied

Listing of labels. Note that PIDs 1 and 21 carry different MCS category pairs (c507,c511 and c213,c840) than this container's own processes (c609,c672):

 ls -lZ /proc  | head -n 10
total 0
dr-xr-xr-x.  9 root root system_u:system_r:container_t:s0:c507,c511         0 Oct  2 08:18 1
dr-xr-xr-x.  9 root root system_u:system_r:container_t:s0:c213,c840         0 Oct  2 08:18 21
dr-xr-xr-x.  9 root root system_u:system_r:container_t:s0:c609,c672         0 Oct  2 08:20 56
dr-xr-xr-x.  9 root root system_u:system_r:container_t:s0:c609,c672         0 Oct  2 08:31 76
dr-xr-xr-x.  9 root root system_u:system_r:container_t:s0:c609,c672         0 Oct  2 08:31 77
drwxrwxrwt.  2 root root system_u:object_r:container_file_t:s0:c609,c672   40 Oct  2 08:20 acpi
-r--r--r--.  1 root root system_u:object_r:proc_t:s0                        0 Oct  2 08:21 buddyinfo
dr-xr-xr-x.  4 root root system_u:object_r:proc_t:s0                        0 Oct  2 08:18 bus
-r--r--r--.  1 root root system_u:object_r:proc_t:s0                        0 Oct  2 08:21 cgroups

Here are the corresponding SELinux audit.log entries from the container host:

type=AVC msg=audit(1601626819.734:613): avc:  denied  { search } for  pid=7322 comm="ps" name="21" dev="proc" ino=63561 scontext=system_u:system_r:container_t:s0:c609,c672 tcontext=system_u:system_r:container_t:s0:c213,c840 tclass=dir permissive=0
type=AVC msg=audit(1601627018.327:614): avc:  denied  { search } for  pid=7301 comm="bash" name="1" dev="proc" ino=61484 scontext=system_u:system_r:container_t:s0:c609,c672 tcontext=system_u:system_r:container_t:s0:c507,c511 tclass=dir permissive=0
type=AVC msg=audit(1601627018.327:615): avc:  denied  { search } for  pid=7301 comm="bash" name="1" dev="proc" ino=61484 scontext=system_u:system_r:container_t:s0:c609,c672 tcontext=system_u:system_r:container_t:s0:c507,c511 tclass=dir permissive=0
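Such denials can be pulled from the host's audit log with ausearch (a sketch; it assumes the audit package is installed on the host):

ausearch -m AVC -ts recent -c ps
# or, to see all recent AVC denials regardless of command:
ausearch -m AVC -ts recent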

When starting the containers with --security-opt label=disable, I can see all PIDs of all containers in the pod.
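Spelled out, that workaround looks like this (a sketch based on the reproduction commands above; label=disable turns off SELinux separation for the container):

podman pod create --name test --share=pid
podman run --pod test -it --rm --security-opt label=disable centos:8 sleep 1000
podman run --pod test -it --rm --security-opt label=disable centos:8 bash
# ps aux inside the second container now shows the sleep process as well.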

Describe the results you expected:

The PIDs of every container in the same pod should be visible, and I should be able to send a kill signal to a process
in another container sharing the same pod and PID namespace.
This is needed to reload haproxy from a sidecar container.
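For context, the intended sidecar reload is roughly the following (a sketch; it assumes haproxy runs in master-worker mode, where SIGUSR2 asks the master process to reload its configuration, and that pkill from procps is available in the sidecar image):

# From the sidecar container, once the shared PID namespace works as expected:
pkill -USR2 -o haproxy    # -o targets the oldest matching process, i.e. the master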

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version
Version:      2.1.1
API Version:  2.0.0
Go Version:   go1.13.15
Built:        Mon Sep 28 04:08:46 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.16.1
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.21-1.el8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: 44dc2e90174f4dcd4040012a62364e7f2564d431-dirty'
  cpus: 4
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: journald
  hostname: gw1.cc1.mynet.at
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-193.19.1.el8_2.x86_64
  linkmode: dynamic
  memFree: 4069076992
  memTotal: 6075191296
  ociRuntime:
    name: runc
    package: runc-1.0.0-145.rc91.git24a3cf8.el8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 18h 7m 9.76s (Approximately 0.75 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 6
    paused: 0
    running: 6
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /data/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 1539
  runRoot: /var/run/containers/storage
  volumePath: /data/containers/storage/volumes
version:
  APIVersion: 2.0.0
  Built: 1601258926
  BuiltTime: Mon Sep 28 04:08:46 2020
  GitCommit: ""
  GoVersion: go1.13.15
  OsArch: linux/amd64
  Version: 2.1.1

Package info (e.g. output of rpm -q podman or apt list podman):

CentOS 8 with podman installed from kubic repo

podman-2.1.1-4.el8.x86_64
container-selinux-2.145.0-1.el8.noarch

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

openshift-ci-robot added the kind/bug label Oct 2, 2020
vrothberg (Member) commented:

@rhatdan @giuseppe PTAL

giuseppe commented Oct 2, 2020

Should containers in the same pod have the same SELinux label?

ah83 commented Oct 2, 2020

> Should containers in the same pod have the same SELinux label?

Maybe this was the case in previous versions of Podman: with version 1.8 I could see the PIDs of the other container
and use kill to send a reload signal to haproxy.

rhatdan commented Oct 2, 2020

Yes, containers within the same pod are supposed to have the same SELinux label.

rhatdan commented Oct 2, 2020

OK, this is a big bug:

# podman pod create --name selinux
# podman run --pod selinux alpine cat /proc/self/attr/current
system_u:system_r:container_t:s0:c204,c851sh-5.0
# podman run --pod selinux alpine cat /proc/self/attr/current
system_u:system_r:container_t:s0:c376,c937sh-5.0
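(The trailing sh-5.0 is just the next shell prompt, since cat prints no newline. The point is that two containers in the same pod received different MCS category pairs: c204,c851 versus c376,c937.)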

rhatdan commented Oct 2, 2020

podman pod create --name selinux --share pid,ipc; podman run --pod selinux alpine cat /proc/self/attr/current; echo; podman run --pod selinux alpine cat /proc/self/attr/current; echo; podman pod rm selinux -f
cb7c40bb79b0998adadbeae30d3adbe051d4c1e9f0cc2ec866cd9dbff49ccc0b
system_u:system_r:container_t:s0:c425,c648
system_u:system_r:container_t:s0:c615,c936
cb7c40bb79b0998adadbeae30d3adbe051d4c1e9f0cc2ec866cd9dbff49ccc0b
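Until the fix lands, a possible stopgap is to pin both containers to the same MCS level by hand (a sketch; s0:c100,c200 is an arbitrary example level, set via Podman's documented --security-opt label=level:... option):

podman pod create --name test --share=pid
podman run --pod test -d --security-opt label=level:s0:c100,c200 centos:8 sleep 1000
podman run --pod test -it --rm --security-opt label=level:s0:c100,c200 centos:8 ps aux
# Both containers now run as container_t:s0:c100,c200, so SELinux no longer
# blocks access to each other's /proc entries.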

rhatdan self-assigned this Oct 2, 2020
rhatdan added the In Progress label Oct 2, 2020
github-actions bot locked this issue as resolved and limited conversation to collaborators Sep 22, 2023