Cacher incompatible with sys-whonix (updates fail for whonix-ws and whonix-gw) #10
On Fri, Aug 19, 2022 at 07:57:05AM -0700, tlaurion wrote:
When cacher is activated, whonix-gw and whonix-ws templates cannot be updated, since both templates implement a check at boot through the systemd service qubes-whonix-torified-updates-proxy-check.
Also, cacher overrides the whonix recipes applied via salt at Qubes installation when the user specifies that all updates should be downloaded through sys-whonix.
The standard place where qubes defines the update proxy to be used is under `/etc/qubes-rpc/policy/qubes.UpdatesProxy` which contains on standard install:
```
$type:TemplateVM $default allow,target=sys-whonix
$tag:whonix-updatevm $anyvm deny
```
And cacher "rudely" overrides those settings, leaving the user confused about why things won't work, per:
https://github.com/unman/shaker/blob/3f59aacbad4e0b10e69bbbcb488c9111e4a784bc/cacher/use.sls#L7-L9
First things first, I think both cacher and whonix should agree on where UpdatesProxy settings should be prepended/modified, which I think historically should be `/etc/qubes-rpc/policy/qubes.UpdatesProxy`, for clarity.
@unman @adrelanos
Thanks for this.
I don't use Whonix, and the people who have used this up to now don't
use it either.
In 4.1 the canonical place for policies is /etc/qubes/policy.d - I don't
have `/etc/qubes-rpc/policy/qubes.UpdatesProxy` on a "standard
install".
The standard place where the update proxy is defined is in
/etc/qubes/policy.d/90-default.policy.
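For reference, the 4.1-format rules look roughly like this (a sketch of the new syntax, not a verbatim copy of 90-default.policy):
```
qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=sys-net
qubes.UpdatesProxy  *  @anyvm            @anyvm    deny
```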
I don't know why Whonix still uses the deprecated file - it could be
easily changed in the updates-via-whonix state.
The caching proxy is entirely agnostic about the path. It will use the
default netvm.
So if the default netvm is set under Whonix-gateway the updates will run
via Tor. If not, then in the clear.
It looks as if the application of Whonix currently is incompatible with cacher
since both want to be the default UpdatesProxy.
One solution might be that if cacher detects that the Updates Proxy is
set to sys-whonix, it sets itself as UpdatesProxy with netvm of
sys-whonix regardless of what the default netvm is.
Similarly, the "update via whonix" state should set netvm for cacher if
that package is already installed, and cacher is set as the default Proxy.
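A rough dom0 sketch of the first idea, with illustrative paths and logic (not what cacher currently does):
```bash
#!/bin/bash
# dom0 sketch: if templates currently update through sys-whonix, make cacher the
# UpdatesProxy but route cacher itself through sys-whonix, so updates stay torified.
current_target=$(grep -hs 'type:TemplateVM' \
        /etc/qubes/policy.d/*.policy /etc/qubes-rpc/policy/qubes.UpdatesProxy |
    grep -o 'target=[^ ,]*' | head -n 1 | cut -d= -f2)

if [ "$current_target" = "sys-whonix" ]; then
    qvm-prefs cacher netvm sys-whonix
fi
```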
If the Whonix qubes are not to have the benefit of caching then it would
be trivial to exclude them from the proxy. The question is about how to
treat the others.
Thoughts?
@unman : My bad... Seems like I have non-defaults which are artifacts of my own mess, playing on my machine with restored states :/
Wiping that. I understand that everything needs to be under 30-user.policy:
So 30-user.policy, being applied before 90-default.policy, should have precedence; will dig that out and reply with results.
@adrenalos:
So is it
@unman @adrenalos:
Just checked a dom0 snapshot against a clean 4.1.0-upgraded-to-4.1.1 state restored from wyng-backup, passed through qvm-block to a dispvm. @adrenalos: I confirm that https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/master/qvm/updates-via-whonix.sls is the one being deployed locally:
Still, I'm confused by the absence of results for
So I'm not really understanding, as of now, where the QubesOS/qubes-issues#7586 (comment) 90-default.policy changes for whonix are coming from.
And this is where trying to follow down the rabbit hole seems to have lost me for a bit:
Expecting to find that configuration file in the Qubes repos leads nowhere with a quick search...
Possible alternative solutions
This is what I have now. Not sure about other users' use cases, but I do not specialize whonix-gw nor whonix-ws the way I specialize fedora and debian templates. This is unfortunate in the long run, since debian-12 will eventually land in the available repositories and would benefit from ngcache as well.
2- Have the whonix torified check revised to comply with qubes 4.1 and not compete with other update proxies
Finally understood what is happening
Ok, learned a different part of whonix today while digging down to the cause of the above issue (templates refusing to use the exposed proxy). It seems that whonix templates shut their local proxy access through 127.0.0.1 when doing
Which tinyproxy does return, and on which whonix enforces checks to know whether updates are to be downloaded through tor or not. Whonix templates are preconfigured in Qubes to actually depend on sys-whonix, which is supposed to offer that service by default (this is part of /etc/qubes/policy.d/90-default.policy) at:
I think it would be advisable that cacher report the same functionality, to not break what seems to be expected. @unman I think the whonix approach is actually right.
Work needed
@unman :
2- You should force removal of /etc/qubes-rpc/policy/qubes.UpdatesProxy when installing cacher.rpm, and add your own (a rough sketch is at the end of this comment):
3- Something isn't right here at cacher package uninstall: Lines 67 to 70 in 3f59aac
It doesn't look like you are removing the cacher lines for the update proxy here; it looks more like a copy-paste of split-gpg: Lines 38 to 41 in 3f59aac
Probably a bad copy-paste, but I think you would like to know before someone complains that uninstalling cacher makes split-gpg defunct. @adrenalos @MaMarek: That's it folks. The rest would work as is. And sorry for the white noise before. That was not so easy to understand :)
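A rough sketch of what those install/uninstall scriptlets might do (the policy file name 50-cacher.policy is purely illustrative, not what cacher currently ships):
```bash
# cacher.spec scriptlet sketch -- illustrative only
# %post: drop the deprecated 4.0-style policy and install cacher's own rule
rm -f /etc/qubes-rpc/policy/qubes.UpdatesProxy
cat > /etc/qubes/policy.d/50-cacher.policy <<'EOF'
qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher
EOF

# %preun (on erase): remove cacher's rule -- and only cacher's, not split-gpg's
rm -f /etc/qubes/policy.d/50-cacher.policy
```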
Prepending is mostly unspecific to Whonix and implemented by Qubes. Qubes is currently moving away from ... I'd suggest asking Marek for guidance on which file name in ... to use.
I don't like the bypass idea. To use the override for testing, ...
I missed the other replies before making my prior reply and didn't read all posts there yet. There's a typo in my nickname btw.
To close this issue and to have cacher (apt-cacher-ng) work out of the box on top of a default installation where sys-whonix has been toggled to handle all updates (at install time or per qubesctl salt deployment):
The current salt script for sys-whonix prepends a new line under
cacher needs to prepend that when deployed, so that all Templates use cacher instead of sys-whonix, and currently does so under
90-default.policy is already deploying the following, enforcing whonix templates (only those, as opposed to all templates in the preceding line) to use sys-whonix for updates:
90-default.policy also specifies in its header not to modify that file. Considering that all policies are moved to the new directory, it seems that
@marmarek @adrelanos : the question is where that line should be prepended, per that salt recipe, in a PR?
@unman @adrelanos : this is why I suggested for cacher to just wipe that file (do the right thing) when applying cacher (which replaces the policy so that all templates use cacher, after having cacher expose what whonix expects in its check with userinfo.html in the previous post).
@adrelanos said:
My implementation recommendation above (exposing under apt-cacher-ng's userinfo.html what whonix-proxy-check expects) is another approach that would not invade whonix templates, and would not require anything other than cleaning up the old artifact above (and understanding Qubes' config file numbering convention, if any).
@unman @adrelanos : as said earlier, if cacher applied the suggestions at #10 (comment), it would not be incompatible with upstream fixes in salt recipes to be applied in dom0 for the next installation. The main issue down the road is if a user reapplies the salt recipe to have sys-whonix download template updates without uninstalling cacher.rpm (which should do the right thing at uninstall). Otherwise, all templates will have repositories configured to go through apt-cacher-ng and will obviously fail. This is why you are all tagged here. This needs a bit of collaboration. Thanks for helping this move forward!
@adrelanos : any reason why whonix templates are not actually testing a tor url, instead of relying on the proxy itself to provide a modified tinyproxy web page in the whonix-gw template stating that sys-whonix is providing tor? The templates should probably verify the reality, that is, that tor urls can be accessed (a subset of sdwdate urls?). Otherwise, this is a dead end, and cacher has already dropped whonix templates as of now, letting them use sys-whonix and not interfering in any way. This is a bit sad, since cacher is not caching the packages downloaded by those templates as of now, nor in any foreseeable future, unless Whonix templates change their proxy test to something that tests the reality of tor being accessible, instead of binding whonix templates to sys-whonix. In the current scenario, the ideal would be to have cacher as the update proxy, and have cacher use sys-whonix as its netvm, while having whonix templates fail per the current failsafe mechanisms if the templates discover that cacher is not using sys-whonix (or anything else torifying all network traffic through tor).
You mean like check.torproject.org? Because it's bad to hit check.torproject.org over clearnet. Also, a system dependency on a remote server for basic functionality should be avoided.
I see. So basically, there is no way to:
Basically, the only foreseeable option to have whonix templates cached by cacher (like all other templates that can rewrite their update URLs) would be if cacher replaced sys-whonix's tinyproxy. @adrelanos @unman : any desire to co-create cacher-whonix adapted salt recipes, packaged the same way cacher is, to have all template updates cached?
Modified OP.
If sys-whonix is behind cacher... I.e.
whonix-gw-16 (Template) -> cacher -> sys-whonix -> sys-firewall -> sys-net
or...
debian-11 (Template) -> cacher -> sys-whonix -> sys-firewall -> sys-net
...then cacher could check if it is behind sys-whonix and, if it is, similarly to Whonix's check, cacher could provide some feedback to the Template. How would cacher know it's behind sys-whonix? Options: ask its tinyproxy or, maybe better, use qrexec. In any case, this breaks Tor stream isolation. But that's broken in Qubes-Whonix anyhow: for stream isolation to be functional, APT (apt-transport-tor) would have to be able to talk to a Tor
I don't think I can create it, but it seems like a cool feature. Contributions welcome.
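A minimal sketch of the qrexec option mentioned above, under these assumptions: sys-whonix would ship a hypothetical qrexec service `whonix.IsTorGateway` that simply prints `tor-gateway`, and a dom0 policy rule would resolve `@default` for that service to the configured gateway. None of this exists today; it only illustrates the shape of the check.
```sh
#!/bin/sh
# Inside cacher (sketch): ask the upstream gateway over qrexec whether it is a
# Tor gateway. 'whonix.IsTorGateway' is a hypothetical service name.
reply=$(qrexec-client-vm @default whonix.IsTorGateway </dev/null 2>/dev/null)

if [ "$reply" = "tor-gateway" ]; then
    echo "updates will be fetched through a Tor gateway"
else
    echo "no Tor gateway upstream; do not advertise torified updates" >&2
    exit 1
fi
```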
That can't be the right solution.
We don't want cacher to *always* return "tor proxy", since it's in the
user's hands whether it routes through Tor or not.
In fact returning that is completely unconnected to whether routing is
via Tor. (The case is different in Whonix where that file can be
included and Tor access is supposedly guaranteed.)
I suggest that cacher set:
qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=sys-whonix
qubes.UpdatesProxy * @tag:whonix-updatevm @anyvm deny
qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher
IF the current target for @type:TemplateVM is target=sys-whonix, then
the netvm for cacher can be set to sys-whonix. That way Whonix users get
caching for all templates EXCEPT the whonix ones.
This won't capture every case, because users may already be using a
tinyproxy connected to sys-whonix, or using an alternative Tor proxy:
I'm not clear how far we want to dig in to those possibilities.
If the option to update over Tor is enabled, and cacher is already
installed and the default target, then there is no need to adjust the
policy rules. Just set the cacher netvm to sys-whonix.
If we do this then the rewrite option should not be applied in Whonix
templates - is there any grain we can access specific to Whonix?
@adrelanos - some input please.
Small update: My past userinfo.html file was/is a hack linked to my attempt to make whonix templates able to interact with cacher. I recommend leaving cacher-template alone (not installing any other software) and modifying cacher-template through a dom0 call as the root user, since cacher-template doesn't have sudo (which is good): And then adding the following lines into /usr/lib/apt-cacher-ng/userinfo.html (patch format)
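For example, the dom0 call could look roughly like this (a sketch: the snippet file name is illustrative, and the patch content itself is whatever was posted above):
```bash
# dom0 sketch: append the userinfo.html additions into the cacher template as root
# (cacher-template has no sudo, so use qvm-run -u root from dom0)
qvm-run --pass-io -u root cacher-template \
    'cat >> /usr/lib/apt-cacher-ng/userinfo.html' < userinfo-snippet.html
```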
Once again, this will break whonix's contract guaranteeing that the proxy (normally tinyproxy) is always torified, and it requires users to give cacher sys-whonix as its netvm to function properly with tor+http urls out of the box. @adrelanos @unman: I'm currently lost on what upstream path will be taken to have cacher usable with whonix.
I'll repeat that I don't believe that this is the right approach.
Confirmation that the caching proxy is connected over Tor must not rest on a
static configuration setting.
Anything that turns Whonix, the all-Tor OS, into a "maybe sometimes not" Tor OS is considered an unacceptable solution.
Agreed. Since that would be risky (could produce leaks).
As that ticket is going, long standing without progress: I would speculate it to be none.
The status quo. That's imperfect but at least not insecure.
Thanks Patrick
The only options are:
1. incorporate cacher into sys-whonix - not wanted by Whonix dev
2. Make "Tor confirmation" dependent on online Tor test - e.g onion
test I outlined. Issue here would be *more* leaking of information that
Tor is used.
3. Status quo - Whonix qubes do not use caching.
Agreed.
I don't understand this part "is there any grain e can access specific to Whonix?".
Good. Though small detail:
This should be Qubes / Whonix default anyhow.
Also OK with me. However, @tlaurion won't be too happy about it. The goal of this ticket was allowing users to cache all updates, including the Whonix Templates. That would surely be a nice feature. But from the Whonix perspective, the updates would then need to be routed through Whonix-Gateway without risk of going through clearnet. Not sure if I mentioned that earlier... For this level of functionality, cacher would have to implement a similar "am I torified" check to the one Whonix Templates use. But either unman didn't like this idea and/or nobody is volunteering to implement it. Therefore I don't think this functionality will happen. Otherwise, back to the drawing board. Maybe the Whonix Template "am I torified" check would need to be implemented in a nicer way, because then it would look more appealing to re-implement it in cacher. Suggestions welcome. Maybe some qrexec based implementation. A Whonix Template would ask "Is the Net Qube I am connected to, directly or through multiple hops, torified?" If yes, permit updates. Otherwise, fail closed.
Indeed.
Not wanted by Whonix dev.
Agreed. To add:
I can easily add such information into qubesdb, using https://github.com/QubesOS/qubes-core-admin-addon-whonix/blob/master/qubeswhonix/__init__.py. Specifically, some value saying "my netvm has
EDIT: the above is about the actual netvm, not how the updates proxy is routed.
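If such a QubesDB entry existed, a template-side check could be as simple as the following sketch (the key name is hypothetical):
```sh
#!/bin/sh
# Sketch: read a hypothetical QubesDB key exposed by the admin addon, e.g.
# /qubes-netvm-is-torified = "1" when the qube sits behind a Whonix gateway.
torified=$(qubesdb-read /qubes-netvm-is-torified 2>/dev/null)

if [ "$torified" = "1" ]; then
    echo "netvm chain reports torified connectivity"
else
    echo "no torified netvm detected; failing closed" >&2
    exit 1
fi
```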
My mistake in my suggestion in my previous post. Primarily, this ticket is about UpdatesProxy settings. (I shouldn't have said Net Qube since these are set to
These are good questions. I am not sure yet. Here are some thoughts.
Example 1) Simple, ideal. Let's start with a simple and ideal example...
This one should work fine.
Example 2)
In this example, connections to onionized repositories won't be possible. (Because the VPN IP is the last in the chain and won't support connections to onions.) But since this is generally not recommended and a minority use case, we can probably ignore it.
Example 3)
This one should work fine too. Thinking from a script inside of
Now that I was thinking more about it, it's about both. UpdatesProxy and Net Qube. Anywhere in the chain or in a specific position, I am not sure about yet. In theory, this could be super complex. Parsing the full chain (an array) of the UpdatesProxy and/or Net Qube chain. Probably best avoided. Perhaps something like:
[1] Might be useful to know because, depending on whether the template is connected to anon-gateway or cacher, the APT repository files need to be modified. [2] Used by the leak shield firewall to avoid non-torified connections.
Indeed.
Fair enough. Generally, I'd prefer UpdatesProxy (whichever implementation it is) to announce itself: whether a) it uses Tor to access updates, and
The "a" is necessary to avoid leaks, the "b" is nice-to-have to choose repository addresses. In practice, we approximate "b" with "a", which is okay for the current needs.
Is setting the flag as a magic header in an error page an issue? It isn't the most elegant way, but it works. If we want a nicer interface, I have two ideas:
The second approach IMO is nicer, but technically could be inaccurate (policy can redirect services to different targets based on the argument, if you explicitly set the argument there). That said, if users really want to bypass this check, they always can. All the documentation and tools we have operate on the wildcard (
BTW, all of the above (and the current situation too) have a TOCTOU problem. The connection is checked only at startup, but the user can later change both the netvm connection and also redirect the updates proxy. A qube can see when its netvm was changed, but has no way to see when the qrexec policy was changed to redirect the updates proxy elsewhere (other than re-checking periodically...). That's a corner case that's probably okay to ignore, but better to be aware of its existence.
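For illustration, this is the kind of per-argument redirection meant above (the service name is hypothetical; only the mechanism matters):
```
updates.CheckProxy  +tor    @type:TemplateVM  @default  allow target=sys-whonix
updates.CheckProxy  +clear  @type:TemplateVM  @default  allow target=sys-net
updates.CheckProxy  *       @anyvm            @anyvm    deny
```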
@unman @adrelanos : opinions on @marmarek's previous implementation suggestion would make this important matter move forward.
I was waiting to see if there's some input from @unman.
Sounds similar to how Whonix Templates are currently testing if they're connected to torified tinyproxy.
Not sure what the difference from the above is here, but it also sounds ok.
Sounds nicer indeed.
Yes. And if users really want to do complicated modifications to do non-standard stuff they should have the freedom to do so. That's good for sure. The common convention in FLOSS is not to add artificial user freedom restrictions.
Good.
Whonix Templates could wrap APT and do the check before calling the real APT but indeed. Still imperfect.
Yes.
This ticket is missing issue tracking tags.
EDIT:
When cacher is activated, whonix-gw and whonix-ws templates cannot be updated anymore, since both whonix-gw and whonix-ws templates implement a check through systemd qubes-whonix-torified-updates-proxy-check at boot.
Also, cacher overrides the whonix recipes applied via salt at Qubes installation, deployed when the user specifies that all updates should be downloaded through sys-whonix. The standard place where qubes defines and applies policies on which update proxy to use is /etc/qubes-rpc/policy/qubes.UpdatesProxy,
which contains on a standard install:
Whonix still has policies deployed at the 4.0 standard place:
And cacher writes those at the standard q4.1 place:
shaker/cacher/use.sls
Lines 7 to 9 in 3f59aac
First things first, I think both cacher and whonix should agree on where UpdatesProxy settings should be prepended/modified, which I think historically (and per Qubes documentation as well) should be under /etc/qubes-rpc/policy/qubes.UpdatesProxy
for clarity and to avoid adding confusion. Whonix policies need to be applied per the q4.1 standard under Qubes. Not the subject of this issue.
@unman @adrelanos @fepitre
The following applies proper tor+cacher settings:
shaker/cacher/change_templates.sls
Lines 5 to 13 in 3f59aac
Unfortunately, whonix templates implement a sys-whonix usage check which prevents the templates from using cacher.
This is documented over https://www.whonix.org/wiki/Qubes/UpdatesProxy, and is the result of
qubes-whonix-torified-updates-proxy-check
systemd service started at boot. Source code of the script can be found at https://github.com/Whonix/qubes-whonix/blob/98d80c75b02c877b556a864f253437a5d57c422c/usr/lib/qubes-whonix/init/torified-updates-proxy-check
Hacking around the current internals of both projects, one can temporarily disable cacher to have the torified-updates-proxy-check succeed and put down its success flag, which subsists for the life of that booted TemplateVM. We can then reactivate cacher's UpdatesProxy override and restart qubesd, and validate that cacher is able to deal with tor+http->cacher->tor+https on Whonix TemplateVMs (a consolidated sketch of these steps follows the list below):
1- deactivate cacher override of the qubes.UpdatesProxy policy:
2- restart qubesd
3- Manually restart whonix template's torified-updates-proxy-check (here whonix-gw-16)
user@host:~$ sudo systemctl restart qubes-whonix-torified-updates-proxy-check
We see that whonix applied his state at:
https://github.com/Whonix/qubes-whonix/blob/98d80c75b02c877b556a864f253437a5d57c422c/usr/lib/qubes-whonix/init/torified-updates-proxy-check#L46
4- Manually change cacher override and restart qubesd
5- check functionality of downloading tor+https over cacher from whonix template:
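A consolidated dom0 sketch of steps 1-5 (file paths and the template name are illustrative; the exact policy file cacher modifies may differ):
```bash
#!/bin/bash
# dom0 sketch of the manual workaround above -- illustrative only
POLICY=/etc/qubes/policy.d/90-default.policy   # wherever cacher's UpdatesProxy override lives

# 1-2: temporarily disable cacher's UpdatesProxy override, then restart qubesd
sudo sed -i 's/^\(qubes\.UpdatesProxy.*target=cacher\)/# \1/' "$POLICY"
sudo systemctl restart qubesd

# 3: let the Whonix check succeed and record its "proxy is torified" flag
qvm-run -u root whonix-gw-16 'systemctl restart qubes-whonix-torified-updates-proxy-check'

# 4: re-enable cacher's override and restart qubesd again
sudo sed -i 's/^# \(qubes\.UpdatesProxy.*target=cacher\)/\1/' "$POLICY"
sudo systemctl restart qubesd

# 5: verify that tor+http(s) downloads work through cacher from the Whonix template
qvm-run --pass-io -u root whonix-gw-16 'apt-get update'
```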
The problem with this is that qubes update processes will start templates and try to apply updates unattended, and this obviously won't work in that case.
The question is then how to have whonix templates do a functional test to see that torified updates are possible, instead of whonix believing it is the only one providing the service. The code seems to implement a curl check, but it doesn't work even if cacher is exposed as a tinyproxy replacement, listening on 127.0.0.1:8082. Still digging, but in the end, we need to apply a mitigation (disable the Whonix check) or get proper functional testing from Whonix, which should check that the torified repositories are actually accessible.
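One possible shape of such a functional test, as a sketch only (the onion address is a placeholder, and 127.0.0.1:8082 is the proxy port mentioned above):
```sh
#!/bin/sh
# Sketch: instead of trusting the proxy's self-description, test that an
# onionized repository is actually reachable through the configured proxy.
ONION_REPO="http://<placeholder-onion-address>.onion/dists/stable/Release"

if curl --silent --max-time 60 --proxy http://127.0.0.1:8082/ "$ONION_REPO" >/dev/null; then
    echo "torified updates are reachable through the proxy"
else
    echo "torified updates NOT reachable; fail closed" >&2
    exit 1
fi
```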
How to fix this?
Some hints:
1- cacher and whonix should modify the policy at the same place to ease troubleshooting and understanding of what is modified on the host system, even more when dom0 is concerned. I think cacher should prepend
/etc/qubes-rpc/policy/qubes.UpdatesProxy
2- Whonix seems to have thought of a proxy check override:
https://github.com/Whonix/qubes-whonix/blob/685898472356930308268c1be59782fbbb7efbc3/etc/uwt.d/40_qubes.conf#L15-L21
@adrenalos: not sure this is the best option, and I haven't found where to trigger that override so that the check is bypassed?
3- At Qubes OS install, torified updates and torifying all network traffic (setting sys-whonix as the default gateway) are two different things, the latter not being enforced by default. Salt recipes are available to force updates through sys-whonix when selected at install, which dom0 still uses after cacher deployment:
So my setup picked up sys-whonix as the default gateway for cacher, since I configured my setup to use the sys-whonix proxy as default, which permits tor+http/https to go through after applying the manual mitigations. But that would not necessarily be the case for default deployments; I would need to verify, sys-firewall being the default unless changed.
@unman: on that, I think the configure script should handle that corner case and make sure sys-whonix is the default gateway for cacher if whonix is deployed. Or maybe your solution is meant to work independently of Whonix altogether (#6), but then there would be a discrepancy between the Qubes installation options, what most users use, and what is available out of the box after installing cacher from the rpm:
shaker/cacher.spec
Lines 50 to 60 in 3f59aac
4- That is, cacher cannot be used as of now for dom0 updates either. Assigning dom0 updates to cacher gives the following error from cacher:
sh: /usr/lib/qubes/qubes-download-dom0-updates.sh: not found
So when using cacher + sys-whonix, sys-whonix would still be used by dom0 (where caching would not necessarily make sense, since dom0 doesn't share the same fedora version as the templates, but I understand that this is desired to change in the near future).
Point being here: sys-whonix would still offer its tinyproxy service, sys-whonix would still be needed to torify whonix template updates, and dom0 would still depend on sys-firewall, not cacher, on a default install (without whonix being installed). Maybe a little bit more thought needs to be given to the long term approach of pushing this amazing caching proxy forward, to not break things for willing testers :)
@fepitre: adding you to see if you have additional recommendations; feel free to tag Marek later on if you feel like it. This caching proxy is a really long awaited feature (QubesOS/qubes-issues#1957) which would be a life changer, with salt recipes cloning from debian-11-minimal and specializing templates for different use cases, and bringing a salt store being the desired outcome of all of this.
Thank you both. Looking forward to a cacher that can be deployed as "rpm as a service" on default installations.
We are close to that, but as of now it still doesn't work out of the box, and it seems to need a bit of collaboration from you both.
Thanks guys!