Does firejail improve the security of my system? thoughts by @rusty-snake #4601
Replies: 7 comments 40 replies
-
This wasn't my main point in the PG discussion. I was comparing it to Flatpak. The problem with using SUID was just one thing I brought up. The main reason I argued for Flatpak over Firejail is its usability (all Flatpak packages are confined by a set of permissions by default, and the user can easily use a GUI like Flatseal to toggle those permissions) and its integration with the rest of the ecosystem (Portal, GNOME, etc.). My ideal setup (especially for a new user who has just come to Linux) is Wayland + the PipeWire API for audio + Flatpak (which, again, integrates nicely with Portal) + Flatseal. That way they have somewhat of a permission-control system on their Linux computers. I just don't see how Firejail fits in. I also think modifying and creating firejail profiles would be harder for the user than just changing toggles in Flatseal. Firetools would have been nice if it could actually save the profile it creates for later use.
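For illustration (my own example, not from the comment above): the same per-app permissions that Flatseal exposes as toggles can also be adjusted from the command line with flatpak override, e.g. revoking home-directory access for one application:

```
# Drop the home-directory permission for one app, then inspect the result.
flatpak override --user --nofilesystem=home org.mozilla.firefox
flatpak override --user --show org.mozilla.firefox
```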
-
For xkcd-1200, Firejail could support using several UIDs (real UIDs, not subuids) under the control of a main UID. This would of course need to be enabled at system-configuration time, not by a user. Then simple DAC controls (chmod 0700, groups) could be used to control which UID can access which files. Some folders could be shared (a tradeoff of convenience vs. security). On top of that, Linux firewalls can apply different rules based on the UID (nftables can match on it; a sketch follows below).
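A rough sketch of what such UID-based firewall rules could look like with nftables (the UIDs 1001/1002 and the table name are placeholders; this setup is my assumption, not something Firejail provides today):

```
# Match outbound traffic by the sending socket's UID ("meta skuid").
nft add table inet peruser
nft add chain inet peruser output '{ type filter hook output priority 0; policy accept; }'
nft add rule inet peruser output meta skuid 1001 accept   # browser UID: network allowed
nft add rule inet peruser output meta skuid 1002 drop     # offline-app UID: no network
```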
-
I think there were also interesting discussions around BeEF (Browser Exploitation Framework), but I can't find them.
-
For protection against executing arbitrary code with full user rights, firejail is great. I also think that it is far better than nothing against targeted attacks by intelligence agencies. But firejail's security could be greatly improved by splitting out the privileged parts into separate binaries. AppArmor can be combined with firejail, and I can introduce both firejail and AppArmor gradually, per application (see the sketch below). Firejail can protect users from targeted attacks to some degree because most targeted attacks don't rely on exploits in SUID binaries or the kernel. For example, …
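A minimal sketch of that kind of gradual, per-application combination (assuming the distribution ships Firejail's AppArmor integration; the profile path is the usual default and may differ per distro, this is my illustration rather than something from the comment):

```
# Load the AppArmor profile that ships with Firejail.
sudo apparmor_parser -r /etc/apparmor.d/firejail-default
# Run a single application under both Firejail and AppArmor confinement.
firejail --apparmor firefox
```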
-
Wouldn't running browsers in docker or podman be even better?
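For context, a rough sketch of what that could mean in practice (my assumption, not from the question: a rootless podman container with only the X11 socket shared; "browser-image" is a placeholder image name):

```
# Rootless container, no extra privileges, only the X11 socket shared read-only.
podman run --rm -it \
  --security-opt=no-new-privileges \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
  browser-image firefox
```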
-
Shouldn't it be mentioned that the risk of Firejail being SUID can be mitigated?
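For example (a commonly suggested mitigation and my own sketch, not a claim from this thread): make the SUID binary executable only by members of a dedicated group, which shrinks the set of local users who can even reach its attack surface.

```
# Restrict execution of the SUID firejail binary to one trusted group.
sudo groupadd firejail-users
sudo usermod -aG firejail-users "$USER"
sudo chown root:firejail-users /usr/bin/firejail
sudo chmod 4750 /usr/bin/firejail   # still SUID root, but no longer world-executable
# Note: a package upgrade may reset these permissions.
```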
-
By the way, if intelligence agencies are in your threat model, you should flash coreboot on a computer without CPU backdoors. BadBIOS utilizes speakers and microphones as network interfaces, because sound can be a medium for wireless communication. None of that can be blocked by virtual machines or firejail; those highly privileged backdoors have direct access to RAM. As long as CPU backdoors and other highly privileged backdoors like BadBIOS remain the lowest-hanging fruit, finding exploits in firejail or virtual machines doesn't make much sense. Intelligence agencies already have you by the balls with CPU backdoors and other highly privileged backdoors in UEFI and the BIOS.
-
Previous/related discussions:
Related PRs:
THIS IS MY OPINION
Virtually all the arguments against firejail can be summed up as: "Firejail
worsens security by acting as a privilege-escalation hole because it is complex
and needs to be SUID." We all know that complexity means bugs, and bugs in SUID
programs cause more damage. Normally we would be done here and could say "this
program worsens security". However, since firejail is a program that is
supposed to increase security, maybe the overall balance is positive (i.e. the
security gains outweigh the new risks introduced). But is the overall balance
actually positive or not? Well, it depends on your threat model.
Privacy Guides has a very good article about Threat Modeling (https://www.privacyguides.org/threat-modeling/)
that defines five questions:
I will answer these questions for myself (a graphical Linux system used by only
one user); however, I think that these answers are at least roughly correct for
most of you.
1. What do I want to protect?
The Confidentiality and Integrity of my system. That includes keeping my password
and keys secret, my emails private and my GitHub account secure, and making sure
my pictures/documents/code/... are not manipulated.
1.1. What do I not want to protect?
The Availability of my system because I break it myself more often than I am
even attacked.
The kernel and the root user. Although their integrity is a prerequisite for
the above points, it is not the goal in itself. Meaning: I fear remote>kernel>password
attacks, but not remote>firefox>kernel attacks if remote>firefox>password is
possible anyway.
2. Who do I want to protect it from?
Cyber crooks who loiter on the internet and prey on random victims.
2.1. Who do I not want to protect it from?
Intelligence agencies and targeted attacks. If they are in your threat model,
use Tails, Whonix or Qubes OS.
3. How likely is it that I will need to protect it?
Cyber crooks usually attack Windows systems, IoT devices (where the S stands for
security) and servers.
Such attacks don't use unpublished zero-days. Instead, they use publicly known
(and often already patched) vulnerabilities or social engineering. Therefore:
install updates! And don't follow "to install this cool program just
curl | bash
" instructions.
4. How bad are the consequences if I fail?
It depends on the attack, but one of the worst would be that someone does criminal things and I am blamed for them.
5. How much trouble am I willing to go through to try to prevent potential consequences?
This is an important point, because you can have too much security. I could build
the sandbox myself: compile seccomp filters myself, set up D-Bus filtering myself,
do the desktop integration myself (TBH, I did this for firejail) and get to know a
lot about my system and the program I want to run. Except for a few experts, none
of us can do this correctly (and no, you aren't an expert). You will either have a
weak sandbox because you forgot things, didn't know about things or made mistakes,
or you will have a sandbox that is too strict and give up on sandboxing.
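To illustrate the kind of per-application fiddling this means in practice, here is a sketch using real Firejail options; the exact combination of flags is my own illustration, not a recommendation, and it still covers only a fraction of what a complete profile has to get right:

```
# Hand-tuning one application: own seccomp filter, D-Bus filtering,
# and a few extra hardening options.
firejail \
  --seccomp \
  --dbus-user=filter \
  --dbus-user.talk=org.freedesktop.Notifications \
  --private-dev \
  --nonewprivs \
  firefox
```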
Summary
I am worried that an attacker can execute commands with full user rights.
An attacker who can execute commands with full user rights is effectively already root.
https://xkcd.com/1200/