Handling firewalld reload #230

Closed
mskarbek opened this issue Feb 19, 2022 · 19 comments

mskarbek commented Feb 19, 2022

Create a container with --publish=80:80. You will get a set of chains/rules in the nft ip nat and ip filter tables which are, obviously, separate from the firewalld tables. Issue firewall-cmd --reload and all communication with that container is lost. Stopping the container then results in:

ERRO[0017] Unable to cleanup network for container 374e58854715b314dbb22d5a1f1057fc8b6571899e2c2e52dd000502853dc7aa: "error tearing down network namespace configuration for container 374e58854715b314dbb22d5a1f1057fc8b6571899e2c2e52dd000502853dc7aa: netavark: code: 1, msg: iptables: No chain/target/match by that name.\n" 

Is there a way to track the nft chains and repair them during the container's lifetime that could be incorporated into netavark?
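
A minimal reproduction of the above, in case it helps (the image name is only illustrative):

# podman run -d --publish 80:80 docker.io/library/nginx
# curl -s -o /dev/null -w '%{http_code}\n' http://localhost:80    # 200
# firewall-cmd --reload
# curl -s --max-time 3 http://localhost:80 || echo "unreachable after reload"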

Versions:

# cat /etc/redhat-release 
CentOS Stream release 9
# rpm -q netavark podman
netavark-1.0.0-1.el9.x86_64
podman-4.0.0-9.el9.x86_64

Used repo: COPR rhcontainerbot/podman4

baude (Member) commented Feb 19, 2022

I know this is something we discussed when putting netavark together. It is definitely on our list of things to look at. @mheon PTAL

mskarbek (Author) commented:

Also, I am curious why netavark chose to use the iptables backend rather than the firewalld backend when firewalld is running. With the firewalld backend, netavark could rely on firewalld/firewalld#483 to restore all missing rules.

Luap99 (Member) commented Feb 19, 2022

@mskarbek Please see containers/podman#5431
Also, the firewalld backend will not solve this problem, since firewalld flushes its own rules on reload as well unless you make them permanent, and permanent rules have the bigger problem of leaking rules after a reboot, etc.

mskarbek (Author) commented:

@Luap99 that is why I pointed at the D-Bus signal. netavark needs to keep its own state and compare it with the current firewall state after handling the reload signal from firewalld.
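
For reference, firewalld announces a completed reload as a D-Bus signal; a minimal sketch of watching for it from a shell (interface and member names as documented by firewalld, worth double-checking against the installed version):

# dbus-monitor --system "type='signal',interface='org.fedoraproject.FirewallD1',member='Reloaded'"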

mheon (Member) commented Feb 20, 2022

Listener is definitely the answer, but the question then becomes: where do we put it?

We can't put it in Netavark; Netavark exits immediately after the network is configured.

We can't put it in Aardvark; Aardvark spins down when no containers are using it, and some networks (notably the default one!) don't use it.

Conmon seems like it could be a logical place, but we'd only want one Conmon process to fire the reload command, and we have one conmon per container. Conmon's Rust rewrite might offer an opportunity to add enough intelligence to make this viable.

We could also write a super-minimal binary with an associated systemd service that would always be running and listening.

Luap99 (Member) commented Feb 20, 2022

@mheon Well, we could also call podman network reload containerID from conmon. In this case every conmon would need to listen on dbus for the reload event. Thinking about it, using conmon is better than the other options because it already has the correct --root and --runroot arguments, so it could also handle containers in non-standard locations.
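
For reference, the command conmon would effectively be triggering (the container ID is a placeholder):

# podman network reload <container-id>   # re-create firewall rules for one container
# podman network reload --all            # re-create rules for all running containers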

baude (Member) commented Feb 20, 2022

Also, I am curious why netavark chose to use the iptables backend rather than the firewalld backend when firewalld is running. With the firewalld backend, netavark could rely on firewalld/firewalld#483 to restore all missing rules.

This is because @mheon was using their new dbus interface and it was/is not complete yet, so we had to follow what was done in the past with CNI and use both. The intent is to back out the iptables code in favor of firewalld as soon as the dbus code is complete AND has made it into distributions.

mheon (Member) commented Feb 20, 2022

Yeah - the firewalld backend is disabled until the firewalld v1.1.0 upstream release, due to a few missing features that have been added upstream but have not yet made it into a release. Once that happens we can re-enable the firewalld backend, conditional on firewalld v1.1.0 or higher being available.

mheon (Member) commented Feb 21, 2022

@Luap99 Thinking about that more - the downside is that we get one podman network reload per running container, so we could potentially burst out 100 separate podman processes when firewalld upgrades - that could be a real strain on system resources.

rhatdan (Member) commented Sep 14, 2022

Any update on this?

mheon (Member) commented Sep 14, 2022

We can discuss this further at the F2F - basically, we need to put a dbus listener somewhere in our code, but Netavark doesn't have a daemon to host one, so we either put it in Aardvark or, potentially, Conmon-rs.

SecT0uch commented:

Using NETAVARK_FW="firewalld" podman run <image>, I get: netavark: Error retrieving dbus connection for requested firewall backend: DBus error: I/O error: No such file or directory (os error 2).

Is netavark currently supposed to work with firewalld?

I see this comment in the code, even though firewalld 1.2.1 is already out.

Luap99 (Member) commented Nov 22, 2022

@SecT0uch Please create a new issue or discussion; this is not related to this issue.

Luap99 (Member) commented Nov 27, 2023

This was fixed in #840, in netavark v1.9.

See https://blog.podman.io/2023/11/new-netavark-firewalld-reload-service/ for info on how to use it.
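
Based on that blog post, enabling it should look roughly like this (the exact unit name comes from the netavark packaging, so verify it with systemctl list-unit-files):

# systemctl enable --now netavark-firewalld-reload.service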

Luap99 closed this as completed Nov 27, 2023
mskarbek (Author) commented:

Now we only need to quickly propagate 1.9 to RHEL. ;)

Luap99 (Member) commented Nov 27, 2023

I would assume it will be part of 9.4/8.10 in ~6 months.

skoppe commented Jun 7, 2024

This was fixed in #840, in netavark v1.9.

See https://blog.podman.io/2023/11/new-netavark-firewalld-reload-service/ for info on how to use it.

Any ideas whether it can support nftables? Right now we override the systemd service to add a call to podman network reload --all. I'd much rather have a service do that for me.
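
A sketch of that kind of override, assuming the drop-in is attached to firewalld.service (extra ExecReload= lines added by a drop-in run after the unit's own reload command; note this can race with firewalld's asynchronous reload, which is one more reason a dedicated listener service is nicer):

# /etc/systemd/system/firewalld.service.d/podman-reload.conf
[Service]
ExecReload=/usr/bin/podman network reload --all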

Luap99 (Member) commented Jun 7, 2024

Any ideas whether it can support nftables?

What exactly do you mean? Using the nftables firewall driver in netavark? In this case yes.

If you mean when you flush your nftables ruleset, then no: the service is only set up to listen for the firewalld event.
However, it should be simple to add a new "oneshot" command that adds the rules back, like the firewalld-reload service does.
So in this case feel free to file a new RFE for that.
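
For reference, selecting the nftables driver directly, mirroring the NETAVARK_FW usage shown earlier in this thread (requires a netavark build that includes the nftables driver):

# NETAVARK_FW=nftables podman run -d --publish 80:80 <image>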

skoppe commented Jun 7, 2024

I was referring to the flush issue, yes. Thanks, I will look into that.
