container can't connect to outer IPs (routing issue?) #13153

Closed
HidingCherry opened this issue Feb 6, 2022 · 15 comments · Fixed by #13159
Labels
kind/bug · locked - please file new issue/PR · network

Comments

@HidingCherry

HidingCherry commented Feb 6, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I have no idea what I did wrong...
Whatever external IP I try to connect to, I get "Network is unreachable".

Steps to reproduce the issue:

  1. podman network create nextcloud-pub

  2. podman create --replace --name=nextcloud --net nextcloud-pub docker.io/library/nextcloud:22-apache
    2.1 podman start nextcloud

  3. podman exec -t nextcloud curl 8.8.8.8

Describe the results you received:

curl: (7) Failed to connect to 8.8.8.8 port 80: Network is unreachable

Describe the results you expected:
IP should be reachable

Additional information you deem important (e.g. issue happens only occasionally):
container network part:

          "NetworkSettings": {
               "EndpointID": "",
               "Gateway": "",
               "IPAddress": "",
               "IPPrefixLen": 0,
               "IPv6Gateway": "",
               "GlobalIPv6Address": "",
               "GlobalIPv6PrefixLen": 0,
               "MacAddress": "",
               "Bridge": "",
               "SandboxID": "",
               "HairpinMode": false,
               "LinkLocalIPv6Address": "",
               "LinkLocalIPv6PrefixLen": 0,
               "Ports": {
                    "80/tcp": null
               },
               "SandboxKey": "/run/user/1002/netns/netns-80097301-a212-a3a9-4310-46cbbfbde481",
               "Networks": {
                    "nextcloud-pub": {
                         "EndpointID": "",
                         "Gateway": "10.89.8.1",
                         "IPAddress": "10.89.8.3",
                         "IPPrefixLen": 24,
                         "IPv6Gateway": "",
                         "GlobalIPv6Address": "",
                         "GlobalIPv6PrefixLen": 0,
                         "MacAddress": "72:37:35:a9:ba:c3",
                         "NetworkID": "nextcloud-pub",
                         "DriverOpts": null,
                         "IPAMConfig": null,
                         "Links": null,
                         "Aliases": [
                              "b7715373bd99"
                         ]
                    }
               }
          },

$ podman inspect nextcloud-pub

[
     {
          "name": "nextcloud-pub",
          "id": "274f5efed57690ccaf30874e2e7e28118e6eaf2d3aa6b9924eb5ca0a24caba6d",
          "driver": "bridge",
          "network_interface": "cni-podman9",
          "created": "2022-02-04T20:48:10.425385931+01:00",
          "subnets": [
               {
                    "subnet": "10.89.8.0/24",
                    "gateway": "10.89.8.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]

Output of podman version:

Client:       Podman Engine
Version:      4.0.0-dev
API Version:  4.0.0-dev
Go Version:   go1.17.6
Git Commit:   ab4af502b3f60f891192356eddaa13092f785612
Built:        Sun Feb  6 22:00:57 2022
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.0-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: bdb4f6e56cd193d40b75ffc9725d4b74a18cb33c'
  cpus: 4
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: xxx
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.15.15-hardened1-1-hardened
  linkmode: dynamic
  logDriver: journald
  memFree: 132042752
  memTotal: 4035612672
  networkBackend: cni
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.4.2-1
    path: /usr/bin/crun
    version: |-
      crun version 1.4.2
      commit: f6fbc8f840df1a414f31a60953ae514fa497c748
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1002/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /sbin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.1.12-1
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 56h 7m 16.49s (Approximately 2.33 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - ghcr.io
  - quay.io
  - docker.io
store:
  configFile: /home/pods/.config/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 5
    stopped: 0
  graphDriverName: btrfs
  graphOptions: {}
  graphRoot: /home/pods/.local/share/containers/storage
  graphStatus:
    Build Version: 'Btrfs v5.16 '
    Library Version: "102"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 4
  runRoot: /run/user/1002/containers
  volumePath: /home/pods/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.0-dev
  Built: 1644181257
  BuiltTime: Sun Feb  6 22:00:57 2022
  GitCommit: ab4af502b3f60f891192356eddaa13092f785612
  GoVersion: go1.17.6
  OsArch: linux/amd64
  Version: 4.0.0-dev

Package info (e.g. output of rpm -q podman or apt list podman):

$ pacman -Qs podman-git
local/podman-git 4.0.0_dev.r14227.gab4af502b-1
    Tool and library for running OCI-based containers in pods (git)

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
physical

openshift-ci bot added the kind/bug label Feb 6, 2022
@Luap99
Member

Luap99 commented Feb 7, 2022

Can you share the output of podman unshare --rootless-netns ip addr? If you do not have a tap0 interface in the output, please reboot and try again.

Luap99 added the network label Feb 7, 2022
@HidingCherry
Author

Thanks - I guess there was something wrong with slirp4netns.
tap0 was not there, so I stopped all containers.
While stopping, I got an error that the slirp4netns process could not be found.
Using top, I indeed couldn't see such a process.

After starting all containers again (without a reboot), I rechecked and tap0 was there; outer communication also worked again.

I guess this was some kind of user error - I recently began using systemd for some containers, so maybe there was some conflict or issue earlier. At that time slirp4netns used nearly 1 GB of RAM - I stopped my systemd-managed container, but I guess something was left half broken.
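
For reference, a systemd user unit for a container is typically created along these lines (illustrative - my exact unit name and options may differ):

$ podman generate systemd --new --files --name postgres
$ mv container-postgres.service ~/.config/systemd/user/
$ systemctl --user daemon-reload
$ systemctl --user enable --now container-postgres.service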

Closing this, as it is solved and not easily (if at all) reproducible.

@Luap99
Member

Luap99 commented Feb 7, 2022

Oh, you use systemd user units? If so, I know how to reproduce this - it is a real bug.

Luap99 reopened this Feb 7, 2022
@HidingCherry
Author

I use one systemd user unit - for postgres, for now.
Although I stopped the container-postgres unit before doing anything.

I have had the issue again, so I'll need to do some testing.

@Luap99
Member

Luap99 commented Feb 7, 2022

It happens if you start the container via systemd. Podman launches the slirp4netns process and systemd keeps that process in the unit's cgroup. So when you stop that systemd unit, systemd kills all processes in that cgroup. This is wrong, since the slirp4netns process should live as long as any container with networks is running.

We have to fix this in Podman by moving the slirp4netns process into a separate cgroup so that systemd does not kill it.
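
A quick way to see which cgroup the slirp4netns process ended up in (a sketch, assuming a rootless user session):

$ pgrep -u "$USER" -a slirp4netns
$ cat /proc/"$(pgrep -u "$USER" slirp4netns | head -n1)"/cgroup

If the process was started from a user unit, the cgroup path points at that unit instead of a long-lived slice, which is exactly why it gets killed when the unit stops.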

@HidingCherry
Author

The systemd user unit is stopped and disabled for extensive testing.

After a reboot (and kernel upgrade), the issue still occurs consistently.
I have 2 slirp4netns processes running.
I have 5 containers.

  1 of them has port_handler=slirp4netns ("external" reverse proxy - for real IPs)
  3 of them have at least one open port on the host ("internal" reverse proxy and two more containers)

Maybe port_handler=slirp4netns is the issue?
You wrote in the other issue that it is not ready to use?
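
For clarity, that option is passed via the container's network mode, roughly like this (illustrative - the real container uses different names, ports and image):

$ podman create --name reverse-proxy-ext --net slirp4netns:port_handler=slirp4netns -p 8080:80 <image>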

$ podman unshare --rootless-netns ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: tap0: <BROADCAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether f6:99:fd:3c:23:e3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.100/24 brd 10.0.2.255 scope global tap0
       valid_lft forever preferred_lft forever
    inet6 fd00::f499:fdff:fe3c:23e3/64 scope global dynamic mngtmpaddr 
       valid_lft 86340sec preferred_lft 14340sec
    inet6 fe80::f499:fdff:fe3c:23e3/64 scope link 
       valid_lft forever preferred_lft forever
3: cni-podman2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a6:ee:7a:6b:fa:de brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3cb9:41ff:feb5:a7d0/64 scope link 
       valid_lft forever preferred_lft forever
5: cni-podman10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 76:1b:bb:9b:8b:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.89.9.1/24 brd 10.89.9.255 scope global cni-podman10
       valid_lft forever preferred_lft forever
    inet6 fe80::741b:bbff:fe9b:8bd5/64 scope link 
       valid_lft forever preferred_lft forever
7: cni-podman9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:ae:53:79:3a:d1 brd ff:ff:ff:ff:ff:ff
    inet 10.89.8.1/24 brd 10.89.8.255 scope global cni-podman9
       valid_lft forever preferred_lft forever
    inet6 fe80::98ae:53ff:fe79:3ad1/64 scope link 
       valid_lft forever preferred_lft forever
8: veth40541b01@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman9 state UP group default 
    link/ether b6:d2:e8:26:e4:7c brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::b4d2:e8ff:fe26:e47c/64 scope link 
       valid_lft forever preferred_lft forever
9: cni-podman1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1e:e1:5b:49:2d:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c468:2cff:fed9:feed/64 scope link 
       valid_lft forever preferred_lft forever
10: vethdc44e9a4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman1 state UP group default 
    link/ether 1e:e1:5b:49:2d:4b brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::1ce1:5bff:fe49:2d4b/64 scope link 
       valid_lft forever preferred_lft forever
11: veth501ab2bc@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman10 state UP group default 
    link/ether 0a:bf:0e:ca:de:df brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::8bf:eff:feca:dedf/64 scope link 
       valid_lft forever preferred_lft forever
12: cni-podman7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7a:a7:ac:1d:6b:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e815:cdff:fe9e:d41d/64 scope link 
       valid_lft forever preferred_lft forever
13: vethef3df17b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman7 state UP group default 
    link/ether 7a:a7:ac:1d:6b:20 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::78a7:acff:fe1d:6b20/64 scope link 
       valid_lft forever preferred_lft forever
14: cni-podman8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:8c:36:7b:48:9a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e4ec:fdff:fe1a:4509/64 scope link 
       valid_lft forever preferred_lft forever
15: vethdbbc2cc1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman8 state UP group default 
    link/ether d6:8c:36:7b:48:9a brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::d48c:36ff:fe7b:489a/64 scope link 
       valid_lft forever preferred_lft forever
16: vethaa740482@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman9 state UP group default 
    link/ether 56:bf:fd:46:0a:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::54bf:fdff:fe46:aee/64 scope link 
       valid_lft forever preferred_lft forever
17: cni-podman3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ba:70:17:4c:69:e2 brd ff:ff:ff:ff:ff:ff
    inet 10.89.2.1/24 brd 10.89.2.255 scope global cni-podman3
       valid_lft forever preferred_lft forever
    inet6 fe80::b870:17ff:fe4c:69e2/64 scope link 
       valid_lft forever preferred_lft forever
18: veth4b62e7cb@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman3 state UP group default 
    link/ether fe:3c:4a:25:73:35 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::fc3c:4aff:fe25:7335/64 scope link 
       valid_lft forever preferred_lft forever
19: veth2639f26a@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman2 state UP group default 
    link/ether a6:ee:7a:6b:fa:de brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::a4ee:7aff:fe6b:fade/64 scope link 
       valid_lft forever preferred_lft forever
20: vethfba8341f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman1 state UP group default 
    link/ether 6a:32:e1:00:d9:a7 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::6832:e1ff:fe00:d9a7/64 scope link 
       valid_lft forever preferred_lft forever

@Luap99
Member

Luap99 commented Feb 7, 2022

It looks like you still have the slirp4netns process running here so you should not have any connection problems.

2: tap0: <BROADCAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether f6:99:fd:3c:23:e3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.100/24 brd 10.0.2.255 scope global tap0
       valid_lft forever preferred_lft forever
    inet6 fd00::f499:fdff:fe3c:23e3/64 scope global dynamic mngtmpaddr 
       valid_lft 86340sec preferred_lft 14340sec
    inet6 fe80::f499:fdff:fe3c:23e3/64 scope link 
       valid_lft forever preferred_lft forever

@HidingCherry
Author

I've found the problem - there is a workaround, though I don't completely like it.

There were indeed two issues here; the one from the beginning - no tap0 device - is fixed.

Now I reconstructed my original container:

podman create \
--replace \
--name=nextcloud \
\
--net nextcloud-pub:ip=10.89.8.3 \
--add-host=reverse-proxy:10.89.8.2 \
\
--net pg-nextcloud:ip=10.89.0.3 \
--add-host=postgres:10.89.0.2 \
\
docker.io/library/nextcloud:22-apache

(I've left the volumes and timezone out of the parameters.)

The issue is that pg-nextcloud is internal - so there is no way to phone out.
As soon as the nextcloud container joins that network, its default route is probably wrong.

I would need some gateway parameter (which is missing) to correct that.
I also haven't found the container's config file (like the ones for CNI networks) where I could manually enter a gateway.
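
A quick way to check which default route the container actually got (a sketch - assumes iproute2 is available inside the image, otherwise cat /proc/net/route works too):

$ podman exec -t nextcloud ip route

If the default route points out of the interface attached to the internal pg-nextcloud network, outbound traffic has nowhere to go.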

$ podman inspect pg-nextcloud 
[
     {
          "name": "pg-nextcloud",
          "id": "3f9abbd86ad9ad1a7dde214c19946db6861063dfc91754d36349756d2fc42532",
          "driver": "bridge",
          "network_interface": "cni-podman1",
          "created": "2022-02-04T18:39:02.615452359+01:00",
          "subnets": [
               {
                    "subnet": "10.89.0.0/24"
               }
          ],
          "ipv6_enabled": false,
          "internal": true,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]

@Luap99
Member

Luap99 commented Feb 7, 2022

OK, I see the other issue: we are adding a default route for the internal network. This is obviously wrong.

Luap99 added a commit to Luap99/common that referenced this issue Feb 7, 2022
Since an internal network has no connectivity to the outside we should
not add a default route. Also make sure to not add the default route
more than once for ipv4/ipv6.

Ref containers/podman#13153

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Luap99 added a commit to Luap99/libpod that referenced this issue Feb 7, 2022
When running podman inside systemd user units, it is possible that
systemd kills the rootless netns slirp4netns process because it was
started in the default unit cgroup. When the unit is stopped all
processes in that cgroup are killed. Since the slirp4netns process is
run once for all containers it should not be killed. To make sure
systemd will not kill the process we move it to the user.slice.

Fixes containers#13153

Signed-off-by: Paul Holzinger <pholzing@redhat.com>
@HidingCherry
Author

I would suggest adding a parameter to select one specific network for the default route (e.g. --gateway=nextcloud or --default-route=nextcloud/IP). Maybe a separate issue would be better for this?

Some of my containers will have multiple networks without the internal flag.
This might conflict with the publish option, if it uses a specific network for that.

I haven't run into issues yet that would require this (migration is ongoing - but halted because of this issue).


Another weird thing: my synapse container does not have the routing issue, so I'm a bit confused about how the issue appears and when routes are added to a container.
The synapse container is also attached to both pg-synapse and synapse-pub.

@Luap99
Member

Luap99 commented Feb 7, 2022

This is a problem with the CNI configs: internal networks should not have the default route set, see containers/common#920
I do not think we should add an option to change the gateway; if you connect to two networks, the container should be able to use either one of them as the gateway.

Some containers might not run into this because the order in which the networks are set up is not deterministic. I think the linked PR should fix your issue. You can test this by manually deleting the 0.0.0.0/0 route from the internal CNI config files in ~/.config/cni/net.d/
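
For reference, the route in question sits in the ipam section of the generated CNI config; an illustrative excerpt (the file name and exact values will differ):

$ cat ~/.config/cni/net.d/pg-nextcloud.conflist
          "ipam": {
               "type": "host-local",
               "routes": [
                    {
                         "dst": "0.0.0.0/0"
                    }
               ],
               "ranges": [
                    [
                         {
                              "subnet": "10.89.0.0/24"
                         }
                    ]
               ]
          }

Deleting the "dst": "0.0.0.0/0" entry for an internal network removes the bogus default route the next time the network is set up.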

@HidingCherry
Author

HidingCherry commented Feb 7, 2022

Yes - I have deleted the route (for both pg networks) and reloaded the network for both containers - nextcloud works again. Thanks for the quick fix.

@HidingCherry
Author

Thanks for those fixes!

Just two things you might want to consider; I'll only mention them briefly.

To make issues reproducible you might want to add some kind of network priority:

  • add networks in alphabetical order
  • add only a single default route

The ordering helps when someone has routing or other network issues that are not consistently reproducible.
A single route helps when one network has some weird stuff going on and the other one does not - otherwise the failure would be somewhat random and hard to track down. A workaround for that would be a user-configurable route.

@Luap99
Member

Luap99 commented Feb 9, 2022

The reason there is a route for each network is because of how it interacts with podman network connect/disconnect. If only one network had a default route and you called disconnect on that network, it would break connectivity, since the other network would not have a default route set.
Deterministic ordering is tracked here: #12850
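
A short sequence illustrating the point (container and network names are only examples):

$ podman network connect other-net nextcloud        # container now attached to two networks
$ podman network disconnect nextcloud-pub nextcloud
$ podman exec -t nextcloud curl 8.8.8.8             # no "Network is unreachable", because the
                                                    # remaining network has its own default route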

@HidingCherry
Author

Ah yes, understandable.
Thanks for the clarification.

mheon pushed a commit to mheon/libpod that referenced this issue Feb 10, 2022

patrycja-guzik pushed a commit to patrycja-guzik/podman that referenced this issue Feb 15, 2022
github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023