
Goals should include not undoing the work of polyfills #17

Open
bakkot opened this issue Aug 9, 2018 · 25 comments

Comments

@bakkot commented Aug 9, 2018

Per #16, this belongs in its own issue, rather than that one.

Original comment:

Goals should include not undoing the work of polyfills like https://github.com/WICG/trusted-types, which need to patch setters and some getters on element prototypes to gate access to browser builtins.
The pattern of trusting early running code to set up bounds for late running code is a well-established one, and breaking this would make it harder for security engineers to do their jobs.
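
A minimal sketch of the gating pattern the quoted comment describes, assuming a hypothetical isTrustedHTML policy check (the real Trusted Types polyfill is considerably more involved):

```js
// Capture the original accessor pair before patching.
const desc = Object.getOwnPropertyDescriptor(Element.prototype, "innerHTML");

// Hypothetical stand-in for a real Trusted Types policy check.
const isTrustedHTML = (value) => value != null && value.__trusted === true;

Object.defineProperty(Element.prototype, "innerHTML", {
  configurable: true,
  enumerable: desc.enumerable,
  get: desc.get,
  set(value) {
    if (!isTrustedHTML(value)) {
      throw new TypeError("Blocked untrusted HTML assignment");
    }
    desc.set.call(this, String(value)); // delegate to the captured original
  },
});
```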

I agree with this very strongly.

(cc @ljharb, @mikesamuel, @koto from the original thread.)

@domenic (Owner) commented Aug 9, 2018

Thanks for filing. This is an explicit anti-goal of the proposal. Browser built-in features do not consult polyfilled or monkeypatched methods; e.g. if you do Object.defineProperty(document, "body", { get() { throw new Error(); } }), then all the browser built-in features that depend on the body element still work; they do not go through your monkeypatch. The goal of this proposal is to bring that same ability to avoid tampering to userland code.
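
A sketch of the behavior described, run in a browser page context:

```js
// Install a monkeypatch over document.body.
Object.defineProperty(document, "body", {
  get() { throw new Error("gotcha"); },
});

// Userland reads now go through the monkeypatch:
try { document.body; } catch (e) { console.log(e.message); } // "gotcha"

// But engine-internal consumers of the body element bypass it entirely:
// layout, scrolling, and other built-ins that use the body keep working.
```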

@ljharb commented Aug 9, 2018

That’s not the case, however, for language builtins - consider Promise, all the RegExp methods, all the SpeciesConstructor calls, etc. If this proposal only provides originals for HTML primordials, and none of those ever call into user-supplied code, then that might be different.
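
One concrete instance of the kind of hook being described: the built-in String.prototype.replace delegates to RegExp.prototype[Symbol.replace], which user code can patch. A small sketch:

```js
// A language built-in consulting a user-patchable hook.
const originalReplace = RegExp.prototype[Symbol.replace];
RegExp.prototype[Symbol.replace] = function (...args) {
  console.log("patched hook consulted");
  return originalReplace.apply(this, args);
};

"abc".replace(/b/, "x"); // logs "patched hook consulted", returns "axc"
```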

However, there’s still the additional use case of being able to deny access to originals - should that go in a separate issue as well?

@domenic (Owner) commented Aug 9, 2018

It is the case for language built-ins. If you do window.Promise = () => { throw new Error(); }, then navigator.getBattery() still returns a real promise.

@bakkot (Author) commented Aug 9, 2018

I appreciate the desire for browser built-in features to be explainable in terms of lower-level features. However, this implementation of that breaks an extremely long-standing assumption on the web, which many, many things are built around. I don't believe it's worth it simply to make new features more explainable.

Could we explore other ways of making those features explainable? For example, could we say that the web platform behaves as if there is a script which runs before any others which captures references (in a nested lexical scope) to everything to which this API would provide access, and which defines layered APIs on the global object (or properties thereof) as functions closing over those captured references? To my reading that would not violate the stated layered API constraint, since userland code which runs first could do exactly the same thing.
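
Roughly the "as-if first script" model being proposed, sketched with illustrative names (nothing here is part of any real API):

```js
// Imagine this runs before any other script on the page:
(() => {
  // Capture originals in a nested lexical scope.
  const origQuerySelector = Document.prototype.querySelector;
  const origAppendChild = Node.prototype.appendChild;

  // Layered APIs are defined as closures over the captured references,
  // so later monkeypatching of the prototypes cannot affect them.
  globalThis.hypotheticalLayeredAPI = function (selector, node) {
    const target = origQuerySelector.call(document, selector);
    return origAppendChild.call(target, node);
  };
})();
```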

@domenic (Owner) commented Aug 9, 2018

That doesn't work for code which does not run first, such as layered API modules.

@mikesamuel

You are free to define goals and non-goals for your proposal, but I would like to avoid surprises down the road.

FYI, it is a goal of ISE, so if the proposal has the effect of preventing such polyfills, ISE will block this from reaching stage 2 on the grounds that it seriously undermines practices which have proven, and continue to prove, necessary for securing web applications.

If this proposal is put up for stage 2 approval and it is not clear that this proposal will not have that effect, ISE will block it on the grounds that ISE cannot endorse "the committee expects the feature to be developed and eventually included in the standard" when appsec is a non-goal.

+@koto
+@xtofian

Browser built-in features do not consult polyfilled or monkeypatched methods

Acknowledged. I do not believe this is relevant to the TT polyfill or similar mechanisms.

@domenic (Owner) commented Aug 9, 2018

This is not a TC39 proposal; the stage process is not applicable.

@mikesamuel

@domenic

Thanks for explaining. Please interpret that as meaning that ISE will seek to block the equivalent of stage 2 for this proposal in all standards fora unless it is clear that appsec is a goal.

@benjamingr

@mikesamuel couldn't code that relies on patching objects simply forbid code from calling getOriginals? (Especially as a standard library import).

I might be wrong, but code that establishes a sandbox needs to prohibit certain patterns of JS anyway (like sloppy mode, or eval that isn't lexically constrained).

@ljharb commented Apr 13, 2019

@benjamingr the current proposal requires an import map for that, which isn’t under the control of the polyfill code, only the consuming app.
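
What that would look like from the consuming app's side, sketched under the assumption (per the import-maps proposal at the time) that import maps may remap built-in module specifiers:

```html
<!-- Only the page's own import map (not the polyfill) can remap the
     built-in module specifier to something inert. The target path here
     is hypothetical. -->
<script type="importmap">
{
  "imports": {
    "std:global": "/inert-get-originals.js"
  }
}
</script>
```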

@mikesamuel

@benjamingr I'm not sure I understand.

The purpose of documenting a design goal is to constrain which mechanisms are used by a design.

If the mechanism chosen is unmitigable, then existing simple mitigations may no longer be effective against newer versions of the libraries that adopt it, breaking the ability of security mechanisms to keep evolving alongside the code they seek to protect.

@benjamingr

I'm sorry, I don't understand any of that (maybe because English is not my first language).

If the mechanism chosen is unmitigable

I'm asking if "don't allow imports from std:global" isn't a viable strategy to mitigate the "unmitigable" part in security sensitive code.

@ljharb
Copy link

ljharb commented Apr 15, 2019

@benjamingr I can't write a module or script that disallows that. All I can do is put prose in my readme telling them to add an entry to their import maps.

@mikesamuel

I'm asking if "don't allow imports from std:global" isn't a viable strategy to mitigate the "unmitigable" part in security sensitive code.

Good question.

Probably not. Bans of widely used APIs only work if the codebase is developed from the beginning with the tool that enforces the ban, and the development team never plans on using third-party library code that uses those APIs, or the source code is pre-compiled to rewrite all uses.

@benjamingr

@ljharb @mikesamuel

@benjamingr I can't write a module or script that disallows that. All I can do is put prose in my readme telling them to add an entry to their import maps.

Probably not.

I'm not a security expert, but isn't this how tools that do capability-theory enforcement work? It's been a while since I've read Mark's paper (cc @erights), but IIUC the reason it works is that containment can be proven (with escape analysis), which requires banning certain language features to begin with, namely:

  • strict mode is required.
  • eval is constrained to be lexical (variable bound eval).
  • APIs that the script is exposed to are explicitly given.
  • Timing attacks (on the GC, renderer or JIT for example) are supposed to be dealt with "elsewhere" and not in the sandbox.

That is, I don't think this proposal makes it any harder or easier to write secure code (but again, my perspective is that of a consumer of these APIs rather than a committee member).
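
To make the first bullet above concrete, a minimal illustration of why strict mode matters for containment (run as a classic, sloppy-mode script):

```js
// In sloppy mode, a bare function call binds `this` to the global object,
// leaking an escape hatch out of any sandbox:
function sloppyLeak() { return this; }
console.log(sloppyLeak() === globalThis); // true

// In strict mode there is nothing to escape through:
function strictNoLeak() { "use strict"; return this; }
console.log(strictNoLeak() === undefined); // true
```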

@erights commented Apr 16, 2019

At #12 (comment) @bzbarsky asks

I'm also not sure why the comment @erights made was marked off-topic, since it's a direct answer to some of the questions asked in this thread about security models....

referring to #12 (comment). I am not sure either. I see no answer to @bzbarsky's question in that thread. @benjamingr, you are asking important questions. Where should we continue this discussion?

@benjamingr

@erights I think it's interesting to discuss, but if history has taught me anything, it's that for proposals (like this one) to progress:

  • The champion needs to be willing to put up with a lot of pressure and churn.
  • There can't be significant backlash (even for small issues, even for bikeshedding) about the proposal contents.
  • There needs to be more buy in from committee members than backlash.

I'm willing to bring this proposal up at the Node.js summit to see if we can get more discussion and "buy-in" (as a consumer and as members) and see if we can move this forward.

That said, that depends on @domenic being willing to push this forward and engage. Without a champion I don't think this proposal is going anywhere.

@benjamingr

And as a side note: I think web security in light of side channels (like the JIT, syscall timings, GC, and other common timing-attack enablers) is extremely understudied and interesting, and if you'd like, I'm willing to discuss it elsewhere regardless. It's definitely also something the Node.js security team can look into or allocate resources for.

@erights commented Apr 17, 2019

Hi @benjamingr , This proposal in its current form both fails to meet its stated goals, and is slightly harmful. If this proposal were fixed so that it does meet its stated goals, it would then be a disaster. Fortunately, a true fix is likely impossible; but an attempted fix can still impose great costs pointlessly.

This proposal purports to make the originals unconditionally available, such that some JavaScript code cannot create an execution context in which the originals are denied to other JavaScript code. It fails to do that. However, it does make it more expensive for JavaScript code to do that, for example by the with(aProxy) trick used by the realms shim and explained in Report on Realms Shim Security Review and Securing EcmaScript, presentation to Node Security. This trick is expensive in the short term, which would make get-originals somewhat harmful. The Realms proposal would officially make get-originals censorable and virtualizable, without this extra expense, so get-originals would only be slightly harmful long term by being made pointless.
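
The flavor of that trick, greatly simplified (the real realms shim is much more careful; `getOriginals` here is just a name chosen for illustration):

```js
// Sloppy-mode script; `with` is forbidden in strict mode. The proxy claims
// to have every name, so every free-variable lookup inside the `with` block
// is intercepted on the scope chain and can be censored.
const scopeHandler = {
  has: () => true, // every lookup stops at the proxy
  get: (_target, name) =>
    name === "getOriginals" ? undefined : globalThis[name], // censor one name
};

with (new Proxy({}, scopeHandler)) {
  console.log(typeof getOriginals); // "undefined": the binding is denied
  console.log(typeof Array);        // "function": everything else passes through
}
```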

Were this proposal made stronger so that it was no longer so easy to work around, for example, by making the originals available by syntax rather than scope lookup, then it would raise the short term costs substantially, but would still be made useless by the Realm proposal once implemented. However, if the stated goal of this proposal were to gain consensus such that Realms could no longer disable it, then we'd have these costs long term. However, it could still be disabled by parsing and rewriting. So the proposal would still fail to achieve its stated goals, while pointlessly imposing costs on lots of other code and execution. The needed parsing and rewriting costs would be paid at page load time, which is a painful time to pay a performance cost. Were this proposal to go forward in such a repaired state, without being disabled by Realms, then I would push for a standard parser or quasi-parser (template string tag) and standard ast for JavaScript, to lower the cost and raise the fidelity of user-level rewriters.

I can't imagine how this proposal could be made strong enough to meet its stated goals. By Turing universality, rewriting or interpretation would seem to be a universal solvent preventing any such unconditional access. However, I could imagine that there may be variations of this proposal that would make such workarounds prohibitively expensive. That would be a disaster.

For example, at the SES Challenge Demo Page, we demonstrate how to prevent code from reading non-overt (side and covert) channels, even egregious ones, by denying that code the ability to sense duration. At one point @matt- responsibly disclosed to us that Date.now() can be accessed despite being disabled, which is a special case of a general problem: denied objects are still accessible by creating a new child realm.
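
The shape of that bug class, sketched (browser page context): denying an intrinsic in the current realm does not deny it in a freshly created child realm.

```js
// Deny Date.now in this realm:
delete Date.now;
console.log(typeof Date.now); // "undefined" here...

// ...but a same-origin iframe is a new child realm with fresh intrinsics:
const iframe = document.createElement("iframe");
document.body.appendChild(iframe);
console.log(typeof iframe.contentWindow.Date.now); // "function": recovered
```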

The get-originals proposal seems to have the stated goal of making it impossible to fix such security vulnerabilities. The most get-originals can actually do is raise the cost of fixing them by shim, pointlessly imposing these costs in the short term. Later proposals like Realms will then need to remove these costs, making get-originals completely pointless. We've seen before how early mistakes, like much of sloppy mode, provoke later compensating kludges, like the poisoning of .caller and .arguments. Better not to introduce the mistakes in the first place.

@benjamingr

@erights you know, Mark, you are one of the brightest researchers I read on this topic and your papers are really inspiring, but when you start with things like:

This proposal in its current form both fails to meet its stated goals, and is slightly harmful. If this proposal were fixed so that it does meet its stated goals, it would then be a disaster.

It makes it much harder to collaborate with you on this. I don't know if you noticed, but it comes off as very aggressive. It might just be me being an "English as a second language" speaker, but interacting with you on something that is literally your research topic is a very intimidating position to begin with.

Not to mention you have years of "know how" in very specific terminology I don't understand very well (I do try to read and keep up though :)).

I will read the links you posted (it might take a day or two) so I don't come off as a clown when I try to respond :) I have to be fair and say I still don't understand why it's not possible to simply restrict the get-originals import (through a loader or import maps, for example) in scripts you don't want to give access to the originals. If that's answered in any of the links, feel free to skip answering it and I'll just get there in my reading :)

@erights commented Apr 17, 2019

it comes off as very aggressive

Sorry. I agree it is inappropriate. My apologies. I will tone it down. In any case, the emotions behind my inappropriate appearance of aggression are not due to you in any way. I deeply regret that I caused you such feelings.

@Alhadis commented May 3, 2019

Maybe this is an overly-simplified perspective, but IMHO when built-ins, prototypes and globals are tampered with, it's either because

  1. A developer has a damn good reason to,

Or,

  2. An attacker has managed to get hostile code running early on a web page, meaning there's a serious security hole with badly-written or poorly-audited code.

In other words, you're restricting what developers' code can do, in order to mitigate damage elsewhere from sloppy/half-assed development practices.

Also, how does one go about testing code that depends on other code leveraging "pure" prototypes if their test-runner framework relies on monkey-patched Function and Object prototypes? This could have subtle, pernicious effects on anything that generates "mostly complete" output, omitting a few unit-tests that never appeared in the output because some library author felt they had to utilise a "purified" Array global.

At the end of the day, everybody's lives are made more complicated, library authors have a whole new Realm of design decisions to be split upon ("What parts of our API should we make 'tamperable'?"), and the ecosystem will wind up split over unreliable tooling and bickering between project authors over implementation details.

(On a light-hearted note, if we're going to expose low-level attributes to JavaScript devs, may I suggest we start with [[IsHTMLDDA]] …? 😉)

@benjamingr

I'm still midway through the reading list I got here, but I'll respond to this:

Maybe this is an overly-simplified perspective, but IMHO when built-ins, prototypes and globals are tampered with, it's either because

It's also possible that you're an SDK, library, or platform and you want to rely on the globals being the way you expect them to be. Sometimes people (or polyfills) override global things, and you need untampered globals.

Think, for example, of a user who has a bug relating to Array.prototype.map and wants to hunt down usages, so they do something like:

Array.prototype.map = () => { debugger; }

But now they hit it a million times in Node.js core or in jQuery or in another library, SDK or platform.

"Damn good reason to" is a good argument when you are not the third party code that runs inside other code and needs to rely on the safety of the platform.

@mikesamuel

@benjamingr

Third-party code runs at the sufferance of first-party code.

It is the first-party code that the user has trusted, so the web platform should privilege first-party code over third-party code and give first-party code the tools to protect their users' interests.

First-party code benefits when some third-party code can program defensively against other third-party code, but first-party code does not benefit when third-party code can bypass access decisions made by first-party code.

@benjamingr

@mikesamuel absolutely, and to be more concrete, I think we all agree that it is important that first-party code has the ability to prevent third-party code from obtaining unmodified globals.

That doesn't mean that all code would want to do so, or that it should be the default. I think it's similar to how CSP works, where websites can enable it for sensitive parts and libraries know to avoid forbidden features (like eval) in those cases.
