Impact of OCSP on SOP #530
I would be absolutely shocked if you found agreement that that would be a good or desirable thing, as you're describing what is rightfully a core OS feature that has historically been managed by a team not working on the browser. This was historically true for Mozilla as well, until recently. I think the answer is that these services (OCSP, CRLs, AIA) are very much outside, and it's simply worth acknowledging and being aware of. OCSP is not compatible with preflights, for example.
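For concreteness, the GET form of an OCSP fetch defined by RFC 6960 (Appendix A.1) can be sketched as below; the responder URL and DER bytes are dummies standing in for a real OCSPRequest structure. Nothing about this exchange fits the CORS preflight model: it doesn't originate from a page, and its shape is fixed by the RFC.

```python
import base64
import urllib.parse

def ocsp_get_url(responder_url: str, der_request: bytes) -> str:
    # RFC 6960, Appendix A.1: the GET form appends the URL-escaped
    # base64 encoding of the DER-encoded OCSPRequest to the responder URL.
    b64 = base64.b64encode(der_request).decode("ascii")
    return responder_url.rstrip("/") + "/" + urllib.parse.quote(b64, safe="")

# Dummy DER bytes; a real request would be a DER-encoded OCSPRequest.
print(ocsp_get_url("http://ocsp.example.com", b"\x30\x03\x0a\x01\x00"))
# → http://ocsp.example.com/MAMKAQA%3D
```

(The POST form instead sends the DER bytes as the body with the `application/ocsp-request` media type; either way, the request is defined by the PKI layer, not by any page.)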
Even if there always was some sort of logical separation, I don't see how that matters for SOP.
@annevk I fail to see how you believe SOP is violated, given that the data is not made available to the page, the cookies are not used, Fetch is not used (they literally use different network stacks), that it's shared among all applications (arbitrary apps can write into the cache), and shared among users. You're taking this view of the primacy of the browser, and I'm telling you that view doesn't hold. This is the same context as WebCrypto, which was delayed several years because no one, besides Mozilla and (for unrelated reasons) Chrome, wanted to or was willing to do crypto in the browser itself. That is, the spec has to conform to the system capabilities, not the other way around.

The verification of a certificate is a black box input to the Web Platform. Certs go in, an answer comes out. That is quite literally the interface on some platforms. Reimplementing all of that solely in order to explain the platform is unlikely to find any support, except perhaps from Firefox, whose fetching of anything is itself new. On the other bug, you've suggested it was to protect intranet pages, but it isn't clear how that's achieved, given you can just use an img tag to synthesize the GET request.
@sleevi if the request is identical to what can be achieved by an `<img>` element, fair enough. (I don't think whether the data is made available matters (that is what CORS on the response is for, not a CORS preflight), cookies don't matter (we don't allow requests without credentials to do more), and Fetch not being used doesn't matter (it's still a request resulting from user action in the browser).)
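To make that distinction concrete, here's a simplified sketch of the response-side CORS check being referred to (real CORS also handles credentials, multiple values, and `null` origins): it decides only whether the page may *read* the data, and is entirely separate from whether the request was ever sent.

```python
def cors_allows_read(response_headers: dict, requesting_origin: str) -> bool:
    # By the time this check runs, the request has already reached the server;
    # all it gates is whether the response bytes are exposed to the page.
    allow = response_headers.get("Access-Control-Allow-Origin")
    return allow == "*" or allow == requesting_origin

print(cors_allows_read({}, "https://example.com"))
# → False: the request was sent, but the response is opaque to the page
print(cors_allows_read({"Access-Control-Allow-Origin": "*"}, "https://example.com"))
# → True
```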
I'm not saying this is necessarily bad or needs to be fixed, I'm just trying to determine the theoretical boundary of what requests can be made from the browser (or underlying systems instructed by the browser), as those effectively define SOP.
@annevk Credentials matter if the concern is ambient authority bleeding over - fetches with credentials can do more, hence cors-anonymous, which is effectively what's at play here. I'm not sure I understand your concern or threat model for headers, so perhaps if you could expand, I could document what implementations do. For example, are you concerned with third-party-added headers or headers added by the implementation as part of processing? What is the underlying concern?
The "concern" is again the private intranet not expecting a request from a browser (or the system it's built on) that includes such a header and therefore doing something unexpected. (Again, it's not really a "concern" or necessarily a "threat"; it's just figuring out where the boundaries are. We restrict what …)
I'm not sure "or the system it's built on" is in scope. For example, any application (and not just browsers) can cause these requests to be issued, so addressing this on an application-by-application basis (of which the browser is just another application in its host environment) doesn't seem a reliable or reasonable path.

I suppose I had hoped that there was a more articulated threat model. For example, it's certainly reasonable to suggest that allowing arbitrary application control of headers would represent a risk, since that is an unbounded set of potentially hostile inputs. Similarly, making requests with ambient authority, or allowing access to data, does present risk. I had not thought of the "SOP" protections as restricting the set of networking requests that can/should be made, but merely as defining how application-controlled access behaved, given JS. I get the feeling there's a different interpretation you may be working with, hence the confusion about how this "violates" SOP.

For example, should the browser (or any application) take on an obligation to protect a server from the defined semantics of RFC 6960 (or the predecessor RFCs it obsoletes)? No, I don't think so, because it would have to be universally enforced by all applications to be meaningful, and it's not. Put differently, it doesn't make sense to reimplement this functionality in the browser if, for example, loading a webpage with OpenGL will load a GPU driver that fetches an XML DTD from an HTTPS URL via the OS fetching subsystem, thereby reintroducing the problem (which happened with at least one driver when interrogated about its DirectX capabilities). For what it's worth, the relevant specifications are …
To be clear, I agree it's good to find out where we draw our security boundaries and how we draw them. But I think I disagree very much with the initial statement:
I view a layered approach as beneficial, just as we don't specify behaviours around, say, TCP Fast Open (which Edge implements, the Chrome team experimented with, and Firefox hopes to experiment with), because that's handled by a different layer. Same with the HTTP vs SPDY or HTTP/2 discussions - those requests were left as exercises to the protocol. OCSP, AIA, and CRLDPs, plus any other requests related to servicing "verify a certificate", are, to me, a call out to a black box where it's up to that implementation to define behaviour, whether or not it uses HTTP. This would be similar to a printer system that used UPnP (which uses HTTP) to discover printers.
I don't see how the layers matter. Whether it's an API or some subsystem that triggers the request, the end result for the receiving server is the same.
I see. I think we're unlikely to make progress then, because I believe the layering matters, especially for defining the security models and assurances.
But you haven't explained how. How does it end up mattering to the server receiving the request?
You haven't explained how or why it's the browser's (or any application's) responsibility to protect servers from the user's OS or its implementation of standards-defined behaviours. My interpretation of your position is that the desired end state is a system built on a defined architecture that fully implements everything from the instruction set up to the 'browser', because that's the only way you can be assured that degree of control. To an extent, this is what Firefox has practiced in some key and critical areas (e.g. PKI and crypto), and it doesn't work with how other browsers and systems are designed. If your concern is one of predictability, that predictability is afforded - by other specifications - or intentionally left opaque (in the case of security policy). You have yet to demonstrate why it is the browser's responsibility to control or reimplement what the OS does, provides for all applications, and shares between all applications. The fact that other applications will interact with and can induce these actions to happen, regardless of any browser mitigations, is to me a demonstration that it's a false security boundary.
I'm not saying any of that.
@annevk So then why is this a concern for Fetch? Because a browser interfaces with a (generally) OS API that makes a network request, correct? How is that similar to / different from printing (e.g. over a network), triggered by …?
It's not a concern for Fetch. Fetch just seemed like a good place to have the discussion, since it touches on limitations we place on requests in the browser. But those limitations might not make sense if they're not enforced everywhere.
Again, I'm interested in exploring the various escape hatches that exist (and you've helped a lot by pointing some of them out and describing them), be it in the browser or outside, to figure out to what extent our own policies are grounded or are on shaky ground.
Limitations on what the browser makes, or on what the site is allowed to control in what the browser makes? My understanding is that the limitations are more targeted at what the site is allowed to control and influence, so that we can have a defensible and consistent boundary of "sites can't control X" and "sites can't control Y" - but that's not a statement that X or Y cannot or will not happen as part of the browsing experience.
Yes, we're holding sites to a different standard. I think that's icky. If preventing sites from doing X and Y actually accomplishes something, letting other layers do X and Y undoes that protection.
I disagree. We wouldn't let sites spawn calc.exe, but the browser spawning calc.exe as part of servicing some action is perfectly reasonable. I think of requests in that same bucket. The security boundary is above the 'spawn calc.exe'. That doesn't mean there isn't one.
If the browser makes a request with method DELETE to a third-party-controlled URL as a result of some action that would be problematic. Agree/disagree?
Disagree / Depends on what other factors contributed before that request was made. For example, if a chooser or confirmation was shown, this could be argued as fine.
Without user interaction, totally invisible, since we're still contrasting with …
Full control over URL? Agree
Okay, so then you agree there is an overall line and we're just arguing over the details of it. To be clear, I agree that exceptions can be reasonable, because a lot of the request would be fixed and only the URL can be controlled, or it's a GET with a single new header that's not obviously bad, but it would be nice if we could derive that from some principles rather than judgment.
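The line being haggled over is roughly the Fetch spec's CORS preflight trigger: cross-origin requests whose method and headers are safelisted can already be synthesized by forms and `<img>` elements, so they go out without a preflight, while anything novel requires server opt-in first. A simplified sketch (ignoring the `Content-Type` value restrictions and other safelist details):

```python
# CORS-safelisted methods and (simplified) request headers, per the Fetch
# spec; a cross-origin request outside these sets needs a preflight.
SAFE_METHODS = {"GET", "HEAD", "POST"}
SAFE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}

def needs_preflight(method: str, header_names: set) -> bool:
    if method.upper() not in SAFE_METHODS:
        return True
    return any(h.lower() not in SAFE_HEADERS for h in header_names)

print(needs_preflight("DELETE", set()))      # → True: a site can't send this silently
print(needs_preflight("GET", set()))         # → False: an <img> can make this request anyway
print(needs_preflight("GET", {"X-Custom"}))  # → True: the novel header is the problem
```

The in-browser rule of thumb is "can existing markup already cause it?"; the thread's question is whether that boundary means much when OS subsystems sit entirely outside it.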
@annevk Well, I didn't include the other qualification ;) Which is that if it was a matter of downloading a file, which automatically opened, and the opening application then did the DELETE request, we then have the question of "Is this a dangerous file or not?", and there, I actually lean towards "not" - e.g. it's ok to download & open. But that's not my area of expertise.
@sleevi non-browser applications "violating SOP" is very different from what we discussed thus far (and I tried to exclude from the scope, since the way those are allowed to run is quite a bit different and OS-dependent).
I'm trying to find where we draw the line :) So automatically opening an app which then makes a URL request is out of scope. Making the URL request directly is (maybe) in scope. And we're still not sure where calling an OS API is :)
Let's try another way to frame it. The user uses the browser to navigate to X. As a result of that action, what requests can the browser or the OS make that "violate SOP" (I hope we have a shared understanding of what this means), that are not visible to the user, and that are controlled by a third party? (I'd also be somewhat interested in those that are visible to the user, except for downloads and navigations to a non-HTTP scheme, as that still seems troublesome, but also less important.)
@annevk I'm asking for the basis on which you include the OS in scope but downloads/user interaction out of scope. To me, they seem one and the same. That is, I don't question that what the browser code does is in scope, to the extent the browser is required to implement functionality X and functionality X is not defined by some other specification. But if it is not code the browser is required to implement - whether because it is a function of the executing application (e.g. downloads) or of the OS (TCP Fast Open, DNS resolution, PKIX) - then I don't think it should be in scope.

We see this spec/security bleed in other areas - like @mikewest with "Let localhost be localhost" - and I feel we should approach those things with fear and trembling. Every time the browser attempts to encroach on the purview of other specs, we make the platform exceptionally more complex. I don't think you'd disagree with this, but that's why I'm trying to understand where and why you feel the OS should be considered in scope.

If I could try a different way of expressing it: it seems like our disagreement is whether the goal is to restrict all requests or simply browser-initiated requests. Does that match your understanding of the discussion so far? Or is it that you think, regardless of API and spec, any request happening as part of a page load constitutes a browser-initiated request, whatever subsystem issues it?
@sleevi when the user takes action we've already been comfortable breaking SOP. WebRTC screen sharing, for instance, breaks SOP all over (or did that never ship?). I want code that the browser is not required to implement to be in scope for analytical purposes. You keep talking about constraining it, but I don't think I've ever made such a suggestion. I just want to know what's possible and to have something to contrast SOP restrictions with. E.g., if such code can request arbitrary URLs and also dictate the HTTP method or the value of a particular HTTP header, that would be interesting information and might mean SOP is overly strict for the non-credential case.
@annevk I'm not sure I agree it should be in scope for analytical purposes. I suspect we're still at an impasse there. I'm also not interested in relaxing the SOP. I'm trying to start from a consistent baseline - define what is in scope of SOP and what is not - so that we can then have meaningful discussions around that. I don't feel we yet have a good understanding of where the boundary lies between what is in scope (and should thus observe SOP) and what is not (and thus needn't necessarily, and/or might not).
Just noted there's a typo in the subject (and I can't correct): OCSP, not OSCP :) |
Since I don't know of a better place to put it and don't want to continue the exploration in bifurcation/expect-ct#18, which is really about something else, this seems as good a place as any.
@sleevi pointed out that OCSP is only done by Firefox directly and other browsers use the OS stack. And only a name-constrained subCA is able to make third-party requests (requests to arbitrary endpoints, determined solely by that party).
This issue is interesting to figure out where the SOP line is drawn as that can tell us whether we are too strict elsewhere or accidentally tell servers they can rely on certain invariants that are actually false, etc.
I think my main issue is that even if it's not the browsers now (except for Firefox), it could be the browser tomorrow. After all, all the pieces of the OS needed to make a browser are part of the browser ecosystem and have to be considered. This might be more self-evident with systems such as Chrome OS.