
Fixes #30803: Bind to socket for Puma and Apache #883

Merged
merged 1 commit into master from fixes-30803 on Nov 24, 2020

Conversation

ehelms
Member

@ehelms ehelms commented Sep 8, 2020

The goal of this change is to bind Puma to a local Unix socket and to have Apache reverse proxy to that socket, increasing the security of the communication between the reverse proxy and the application server.
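
For context, Apache's mod_proxy can talk to a local Unix socket directly; a minimal sketch of what the vhost fragment could look like (the socket path and the hostname after the | are placeholders, not the exact values this module templates):

# requires Apache >= 2.4.7 with mod_proxy and mod_proxy_http loaded
ProxyPass "/" "unix:/run/puma.sock|http://foreman.example.com/"
ProxyPassReverse "/" "unix:/run/puma.sock|http://foreman.example.com/"

The hostname part after the | is only used to construct the Host header and rewrite URLs; the actual traffic goes over the socket.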

I am looking to understand whether I need to enforce permissions and user/group ownership on the socket to enhance security. There are also currently SELinux denials associated with this change that we will need to account for. On my test system they are:

type=AVC msg=audit(1599591946.533:10070): avc:  denied  { connectto } for  pid=6405 comm="httpd" path="/run/puma.sock" scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:system_r:foreman_rails_t:s0 tclass=unix_stream_socket

SELinux policy changes needed to address -- theforeman/foreman-selinux#113
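
On the permissions question: systemd socket units can own the socket file's mode and ownership, so one option would be to set them there. A sketch, with placeholder user/group/mode values rather than anything this PR settles on:

[Socket]
ListenStream=/run/puma.sock
SocketUser=foreman
SocketGroup=apache
SocketMode=0660

That would restrict the socket to the Foreman user plus the Apache group instead of the world-writable default.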

@ehelms
Member Author

ehelms commented Sep 8, 2020

Combines with theforeman/foreman#7972

@ehelms
Member Author

ehelms commented Sep 8, 2020

Note: I expect tests to fail but I want feedback on the approach before I do a bunch of updates.

manifests/config.pp Outdated
Member

@ekohl ekohl left a comment


Since the current implementation depends on a Foreman change, we'll need to be careful about compatibility. Right now the module is compatible with Foreman 2.0, as specified in README.md. We want to support current stable + development.

Ideally we'd write any Foreman patch in such a way that it's compatible with both the old FOREMAN_BIND (without a protocol + port) and the new (with). Then cherry pick that into a stable branch. In the compatibility notes we'd then write that you need at least Foreman 2.1.x (2.1.2 at the time of writing?). I'd also like to cherry pick this specific patch to 2.1 when it's finished.
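
To illustrate the compatibility concern (illustrative values, not the exact ones from the Foreman patch): the old style only carried a host, while the new style would carry a full bind URI, e.g.

# old style: host only, the port comes from elsewhere
FOREMAN_BIND=0.0.0.0

# new style: protocol included, either TCP with a port or a Unix socket
FOREMAN_BIND=tcp://0.0.0.0:3000
FOREMAN_BIND=unix:///run/foreman.sock

A backwards-compatible patch would accept both forms.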

manifests/init.pp Outdated
manifests/config.pp Outdated
@ehelms ehelms force-pushed the fixes-30803 branch 2 times, most recently from 80ba3dc to e5c2e03 on September 10, 2020 12:14
manifests/config/apache.pp Outdated
@ehelms
Member Author

ehelms commented Sep 10, 2020

Unit tests are now passing but the acceptance tests won't pass until theforeman/foreman#7972 is available as an RPM

manifests/config/apache.pp Outdated
@ekohl
Member

ekohl commented Oct 20, 2020

Given this does not happen on the other OSes, I am wondering whether a difference in systemd versions is causing this.

Unintentionally I just reproduced this on Fedora 32. I had a foreman.socket and a foreman.service which were both running. Then a systemctl restart foreman.socket triggered it as well.

@ehelms
Member Author

ehelms commented Oct 20, 2020

restart

Was it running on localhost:3000 and then switched to a Unix socket? Or was it simply running, and you restarted foreman.socket and had it blow up?

@ekohl
Member

ekohl commented Oct 20, 2020

From my journal:

Oct 20 21:02:47 wisse systemd[1267]: Starting Foreman...
Oct 20 21:02:48 wisse foreman[746290]: => Booting Puma
Oct 20 21:02:48 wisse foreman[746290]: => Rails 6.0.3.3 application starting in production
Oct 20 21:02:48 wisse foreman[746290]: => Run `rails server --help` for more startup options
Oct 20 21:02:49 wisse foreman[746290]: API controllers newer than Apipie cache! Run apipie:cache rake task to regenerate cache.
Oct 20 21:02:52 wisse foreman[746290]: Failed to load systemd plugin
Oct 20 21:02:52 wisse foreman[746290]: ["tcp://0.0.0.0:5000"]
Oct 20 21:02:52 wisse foreman[746290]: [746290] Puma starting in cluster mode...
Oct 20 21:02:52 wisse foreman[746290]: [746290] * Version 5.0.2 (ruby 2.7.2-p137), codename: Spoony Bard
Oct 20 21:02:52 wisse foreman[746290]: [746290] * Min threads: 0, max threads: 16
Oct 20 21:02:52 wisse foreman[746290]: [746290] * Environment: production
Oct 20 21:02:52 wisse foreman[746290]: [746290] * Process workers: 2
Oct 20 21:02:52 wisse foreman[746290]: [746290] * Preloading application
Oct 20 21:02:52 wisse foreman[746290]: [746290] * Closing unused activated socket: tcp:0.0.0.0:5000
Oct 20 21:02:52 wisse foreman[746290]: [746290] Use Ctrl-C to stop
Oct 20 21:02:52 wisse foreman[746290]: [746290] - Worker 0 (pid: 746296) booted, phase: 0
Oct 20 21:02:52 wisse foreman[746290]: [746290] - Worker 1 (pid: 746300) booted, phase: 0
Oct 20 21:05:20 wisse systemd[1267]: foreman.socket: Succeeded.
Oct 20 21:05:20 wisse systemd[1267]: Closed Foreman HTTP Server Accept Sockets.
Oct 20 21:05:20 wisse systemd[1267]: Stopping Foreman HTTP Server Accept Sockets.
Oct 20 21:05:20 wisse systemd[1267]: foreman.socket: Socket service foreman.service already active, refusing.
Oct 20 21:05:20 wisse systemd[1267]: Failed to listen on Foreman HTTP Server Accept Sockets.
Oct 20 21:05:20 wisse systemd[1267]: Dependency failed for Foreman.
Oct 20 21:05:20 wisse systemd[1267]: foreman.service: Job foreman.service/start failed with result 'dependency'.
Oct 20 21:05:25 wisse foreman[746290]: [746290] - Gracefully shutting down workers...
Oct 20 21:05:26 wisse foreman[746290]: Exiting
Oct 20 21:05:26 wisse systemd[1267]: foreman.service: Succeeded.

I was working on a different patch, but I think what's similar is that a file descriptor was open for the correct bind (0.0.0.0:5000) but Puma didn't consume it. Perhaps you're hitting a similar thing, where one is restarted before the other was really reloaded/restarted.
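
One way to check whether that is what's happening (just a debugging sketch, not part of this change) is to compare what the socket unit declares with what Puma reports at boot:

systemctl cat foreman.socket       # what systemd is told to listen on
systemctl status foreman.socket    # whether systemd currently holds the fd
journalctl -u foreman.service | grep -i socket   # look for Puma's "Closing unused activated socket" lines

Puma logging "Closing unused activated socket" for a bind it was supposed to use is the symptom described above.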

@ehelms
Member Author

ehelms commented Oct 21, 2020

I think, from some testing, the fundamental problem I am running into is that foreman.service is being activated by the packaging, and this seems to be exacerbated on the Debian side. Given that foreman.socket has control of activating the service, I think we need to move away from activating the service directly and focus more on the socket. I did some searching but I am not sure where this happens in the Debian packaging. @ekohl @evgeni can one of you point me in the direction of this?

@evgeni
Member

evgeni commented Oct 22, 2020

@evgeni
Member

evgeni commented Oct 22, 2020

However, I don't exactly understand what the problem with the service vs. the socket is.
The service can bind the port itself, or get activated by foreman.socket; both are valid scenarios, and starting the socket while the service is running should effectively be a no-op?

@evgeni
Member

evgeni commented Oct 22, 2020

Maybe we want dh_systemd_start --no-start? https://manpages.debian.org/testing/debhelper/dh_systemd_start.1.en.html
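
If we go that route, a sketch of what that could look like in debian/rules (unit name and debhelper specifics are assumptions on my side):

override_dh_systemd_start:
	dh_systemd_start --no-start foreman.service

Newer debhelper compat levels use dh_installsystemd instead, so the override target would change accordingly.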

@ehelms
Member Author

ehelms commented Oct 22, 2020

So, I think what happens is:

  • foreman package installs
  • triggers activation of foreman.service running with a bind to host:port
  • puppet runs configuring the foreman.socket
  • puppet attempts to start the foreman.socket which tries to activate the foreman.service which is already activated with a different bind
  • puppet fails

The annoying part is, on Debian 10 the above happens if the foreman.service unit requires the foreman.socket unit. If I drop the Requires, then this fails on Ubuntu 18.04. This never fails on EL7.
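
For reference, the dependency I am describing is roughly this stanza in the service unit (a sketch, not the full unit):

# foreman.service
[Unit]
Requires=foreman.socket
After=foreman.socket

With the service already running on its packaged host:port bind, systemd then refuses to start the socket unit -- the "Socket service foreman.service already active, refusing." lines in the journal above.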

@ekohl
Member

ekohl commented Oct 22, 2020

I'm starting to wonder if we should wait for Puma 5.1.0 which should include puma/puma#2362. That will simplify the configuration required. However, it may complicate support for older versions.

@ehelms
Member Author

ehelms commented Oct 22, 2020

I think Puma 5.1.0 will clean things up, but @ekohl you have pointed out the hardening this provides and I think that's a good thing to have. I am less familiar with this area, but if I look at the state of EL and Debian post-install, pre-installer run, we have variation there and I think we'd benefit from commonality:

On EL7, both foreman.service and foreman.socket come with the foreman-service package. Both are installed from packaging as loaded and inactive:

[root@pipeline-foreman-server-nightly-centos7 vagrant]# yum install foreman-service -y
[root@pipeline-foreman-server-nightly-centos7 vagrant]# systemctl status foreman.service
● foreman.service - Foreman
   Loaded: loaded (/usr/lib/systemd/system/foreman.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://theforeman.org
[root@pipeline-foreman-server-nightly-centos7 vagrant]# systemctl status foreman.socket
● foreman.socket - Foreman HTTP Server Accept Sockets
   Loaded: loaded (/usr/lib/systemd/system/foreman.socket; disabled; vendor preset: disabled)
   Active: inactive (dead)
   Listen: 0.0.0.0:3000 (Stream)

On Debian 10 and Ubuntu 18.04, foreman.service comes with the install of the foreman package, and foreman.socket comes with the install of the foreman-service package. The foreman.service is loaded and active, while the foreman.socket is loaded but inactive:

root@pipeline-foreman-server-nightly-debian10:/home/vagrant# systemctl status foreman.service
● foreman.service - Foreman
   Loaded: loaded (/lib/systemd/system/foreman.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2020-10-22 12:28:17 UTC; 3min 21s ago
     Docs: https://theforeman.org
    Tasks: 0 (limit: 4915)
   Memory: 0B
   CGroup: /system.slice/foreman.service

Oct 22 12:28:17 pipeline-foreman-server-nightly-debian10 systemd[1]: Starting foreman.service...
Oct 22 12:28:17 pipeline-foreman-server-nightly-debian10 foreman[31396]: foreman not configured to start. Please edit /etc/default/foreman to enable.
Oct 22 12:28:17 pipeline-foreman-server-nightly-debian10 systemd[1]: Started foreman.service.
Oct 22 12:29:44 pipeline-foreman-server-nightly-debian10 systemd[1]: Dependency failed for Foreman.
Oct 22 12:29:44 pipeline-foreman-server-nightly-debian10 systemd[1]: foreman.service: Job foreman.service/start failed with result 'dependency'.
root@pipeline-foreman-server-nightly-debian10:/home/vagrant# systemctl status foreman.socket
● foreman.socket - Foreman HTTP Server Accept Sockets
   Loaded: loaded (/lib/systemd/system/foreman.socket; enabled; vendor preset: enabled)
   Active: inactive (dead)
   Listen: 0.0.0.0:3000 (Stream)

Oct 22 12:29:44 pipeline-foreman-server-nightly-debian10 systemd[1]: foreman.socket: Socket service foreman.service already active, refusing.
Oct 22 12:29:44 pipeline-foreman-server-nightly-debian10 systemd[1]: Failed to listen on Foreman HTTP Server Accept Sockets.
Oct 22 12:29:44 pipeline-foreman-server-nightly-debian10 systemd[1]: foreman.socket: Socket service foreman.service already active, refusing.
Oct 22 12:29:44 pipeline-foreman-server-nightly-debian10 systemd[1]: Failed to listen on Foreman HTTP Server Accept Sockets.

@ekohl
Member

ekohl commented Oct 22, 2020

I think Puma 5.1.0 will clean things up but @ekohl you have expressed the hardening this provides and I think that's a good thing to have.

Yes. The everlasting struggle.

Maybe we want dh_systemd_start --no-start? https://manpages.debian.org/testing/debhelper/dh_systemd_start.1.en.html

I think it makes sense not to start the service by default. We have an installer and there are no migration scripts to create the PostgreSQL DB. While it's normally the Debian policy to automatically start services, we're sufficiently different that it makes sense not to.

I would also suggest cherry-picking that to 2.2 if it works in nightly. Not starting a service by default is already the norm on Red Hat-based systems, so the installer should handle it perfectly well.

@ehelms
Member Author

ehelms commented Oct 27, 2020

This PR seems to be hitting this -- #897

@ehelms
Member Author

ehelms commented Oct 27, 2020

Woohoo, the Debian 10 tests passed with the packaging updates

@ehelms
Member Author

ehelms commented Nov 18, 2020

I rebased this on latest to re-run the tests given it has sat for a bit.

Contributor

@wbclark wbclark left a comment


I installed nightly on CentOS 7 with this PR using Forklift. Playing around with it a bit, I can see the socket at /run/foreman.sock and the drop-in configuration with expected values at /etc/systemd/system/foreman.socket.d/installer.conf (I had to read some docs to understand the purpose of the blank ListenStream=). I tested basic functionality and everything seems to be working as expected.
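
For reference, that drop-in is roughly of this shape (a sketch; the installer generates the real file) -- the blank ListenStream= clears the packaged TCP listener before the Unix socket path is added:

# /etc/systemd/system/foreman.socket.d/installer.conf
[Socket]
ListenStream=
ListenStream=/run/foreman.sock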

I couldn't install nightly on Ubuntu 18.04, but the failure is with redis.service and seems wholly unrelated to the changes here.

@ehelms ehelms merged commit 2473950 into theforeman:master Nov 24, 2020
@evgeni
Member

evgeni commented Nov 26, 2020

I think this broke EL8 installs:

% grep 'AVC.*foreman.sock' var/log/audit/audit.log
type=AVC msg=audit(1606342776.512:2783): avc:  denied  { getattr } for  pid=15093 comm="rails" path="/run/foreman.sock" dev="tmpfs" ino=121834 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file permissive=0
type=AVC msg=audit(1606342862.770:2796): avc:  denied  { write } for  pid=14897 comm="httpd" name="foreman.sock" dev="tmpfs" ino=121834 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file permissive=0
type=AVC msg=audit(1606342862.804:2797): avc:  denied  { write } for  pid=14900 comm="httpd" name="foreman.sock" dev="tmpfs" ino=121834 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file permissive=0

There are also other denials:

% grep 'AVC.*vda' var/log/audit/audit.log
type=AVC msg=audit(1606342756.671:2781): avc:  denied  { write } for  pid=15093 comm="rails" name="home" dev="vda1" ino=100696036 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=unconfined_u:object_r:rpm_script_tmp_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1606342756.671:2782): avc:  denied  { write } for  pid=15093 comm="rails" name="home" dev="vda1" ino=100696036 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=unconfined_u:object_r:rpm_script_tmp_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1606342815.244:2789): avc:  denied  { write } for  pid=15466 comm="sidekiq" name="home" dev="vda1" ino=100696036 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=unconfined_u:object_r:rpm_script_tmp_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1606342815.244:2790): avc:  denied  { write } for  pid=15466 comm="sidekiq" name="home" dev="vda1" ino=100696036 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=unconfined_u:object_r:rpm_script_tmp_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1606342839.039:2792): avc:  denied  { write } for  pid=15574 comm="sidekiq" name="home" dev="vda1" ino=100696036 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=unconfined_u:object_r:rpm_script_tmp_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1606342839.039:2793): avc:  denied  { write } for  pid=15574 comm="sidekiq" name="home" dev="vda1" ino=100696036 scontext=system_u:system_r:foreman_rails_t:s0 tcontext=unconfined_u:object_r:rpm_script_tmp_t:s0 tclass=dir permissive=0

@evgeni
Member

evgeni commented Nov 26, 2020

Why is /run/foreman.sock labeled system_u:object_r:var_run_t:s0? foreman-selinux has a file context that makes it system_u:object_r:foreman_var_run_t:s0 (and that would be allowed to be written to by httpd).

@evgeni
Member

evgeni commented Nov 26, 2020

Hah, the file context entry is /run/foreman\.sock regular file system_u:object_r:foreman_var_run_t:s0 -- well, it's not a regular file.
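
If so, the fix would presumably be to make the file context match a socket file rather than a regular file -- something along these lines (a sketch; the real change belongs in foreman-selinux):

# in the policy's .fc file, -s marks a socket file
/run/foreman\.sock    -s    gen_context(system_u:object_r:foreman_var_run_t,s0)

# or ad hoc on a running system
semanage fcontext -a -f s -t foreman_var_run_t '/run/foreman\.sock'
restorecon -v /run/foreman.sock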
