Consider surfacing sanitization fallbacks to the developer #21
What might be useful is to allow registration of callbacks that are invoked when a sink is supplied with a value of a type other than the appropriate TrustedType for the sink (e.g. if .innerHTML is assigned a plain string, or a value of TrustedURL, or something else). The callback must be a function that returns the correct TrustedType for the sink (if some other type is returned, an exception results). This requirement avoids the need for security reviewers to make special efforts to find implementations of such callbacks: the callback mechanism itself is not privileged. Of course, in many cases the callback's implementation will use the public builder APIs of the corresponding type. Such a callback mechanism would be useful for a number of reasons.
With the latter point in mind, it might make sense for the platform to have non-configurable behavior that always throws an exception when a value that is not a Trusted Type reaches a sink (either assigned directly when no callback is registered, or returned from the callback when one is).
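A minimal sketch of what such a registration could look like (setSinkFallback is a hypothetical name, and TrustedHTML.escape is assumed as one of the inherently-safe public builders):

// Hypothetical hook: called whenever a sink that expects TrustedHTML is
// handed a value of some other type. The platform enforces that a
// TrustedHTML comes back; anything else results in an exception at the sink.
TrustedTypes.setSinkFallback('TrustedHTML', (value, sinkName) => {
  console.warn('Non-TrustedHTML value reached sink', sinkName);
  return TrustedHTML.escape(String(value));
});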
The polyfill allows us to enforce any behavior in the sinks, including a configurable behavior for throwing, logging and sanitizing values. However, it might be problematic to have that much freedom in the native version, as the simplicity of native TT lies in their being based on the type system (we disable the string-based versions of the sinks). Having a custom implicit string-to-TT conversion complicates that a bit.

The approach you're suggesting seems to encourage a design where strings (or application-specific wrappers over them) continue to be used throughout the application, and the conversion is done at the sinks themselves. I understand why this might be useful for existing type implementations (e.g. SafeTypes or Angular types), but I think it would be beneficial for those types to wrap over TT instead of strings.

That said, I feel there is a need to allow custom, but explicit, string-to-TT conversion, e.g. in the form of installing sanitizers, URL filters and such. Those user-defined sanitizers could do the conversions you need, e.g. there might be a sanitizer accepting a plain string and returning the appropriate Trusted Type.

To solve the custom logging issue in the native version, can we have an exception handler per-type, @mikewest? Something like:
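(A sketch; onSinkViolation is only an illustrative name for such a per-type handler.)

// Hypothetical per-type handler, consulted before the native sink throws.
// It returns nothing: the assignment still fails, but the application gets
// a chance to log or report the violation with its own tooling.
TrustedTypes.onSinkViolation('TrustedHTML', (value, sinkName) => {
  console.error('Trusted Types violation at', sinkName, 'value:', value);
});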
The approach you're proposing (no implicit sanitization/conversion at sinks; all sanitization has to be explicitly invoked) is certainly more principled. However, it's important to note that (unless I'm missing something) there is no security difference between that and implicit sanitization at the sink. My reasoning is this: there are two basic ways to establish that a value is a member of a given trusted type:

1. By applying a runtime predicate or transformation (e.g. a sanitizer) that establishes that the value itself is safe for the given context.
2. By provenance: the value was produced by code that has been security-reviewed and is known to only emit safe values.
(Aside: Most types have some members whose membership can be established either way. For some types, there's no general way to establish membership at runtime, i.e. membership can only be established based on provenance. TrustedScriptURL is an example: there is no general predicate that can decide whether a given URL points to something trustworthy or not. One can, however, construct application-specific predicates, e.g. by checking that a URL points into a directory on a particular host that is known to serve trustworthy static assets.)

Obviously, in case (2), the code needs to pass the value in the form of the type -- the type serves to attach provenance to the value. However, for values that can be turned into a trusted type based on a runtime predicate/transformation, it really doesn't matter from a security perspective whether that happens near the source or the sink, or somewhere in between: TrustedURL.sanitize makes a TrustedURL for any arbitrary input. Calling it just before assignment to HTMLAnchorElement#href (explicitly or implicitly) is just as safe as calling it somewhere near the beginning of the flow of that value.

There is a difference from a functional-correctness perspective: sanitization can fail. E.g. TrustedURL.sanitize would reject custom-scheme URLs (say, whatsapp://), even though a human reviewer could establish that they're safe. If such sanitization happens implicitly near the sink, this results in a functional bug. It'll be fairly apparent if there is test coverage (the link won't work). However, it could still be somewhat tedious to figure out where the value comes from and was originally constructed, which is the place where the code is best refactored to use a custom builder that supports this scheme. This is one reason why implicit near-sink sanitization should be optional.

Note that the implicit-conversion hook as proposed in #21 (comment) does not enlarge the TCB: the platform requires the callback/hook to return a value of the correct type for the destination context. This means that the callback can use the public, inherently-safe builder APIs for the types.
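To make the refactoring point concrete, such a custom builder might look roughly like this (a sketch; TrustedURL.unsafelyCreate is assumed as the low-level, review-required builder):

// Application-specific builder for a scheme that the generic sanitizer
// rejects. A security reviewer vets this one function; values returned
// from it carry that provenance.
function createWhatsappUrl(phoneNumber) {
  if (!/^\+?[0-9]{6,15}$/.test(phoneNumber)) {
    throw new TypeError('Not a phone number: ' + phoneNumber);
  }
  // TrustedURL.unsafelyCreate is an assumption about the low-level builder;
  // it is the call a reviewer would look for.
  return TrustedURL.unsafelyCreate('whatsapp://send?phone=' + phoneNumber);
}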
Based on experience with the SafeHtml types, it's my impression that allowing plain strings to flow to sinks (DOM API wrappers, or contextually escaping/sanitizing template systems) and "just work" in most cases was fairly important for this approach to be practical. The safe/trusted-types builders are a bit more cumbersome than simple string concatenation/formatting, and not requiring developers to deal with that except in rare, unusual scenarios (e.g. URLs with custom schemes, HTML markup passed from a backend to a frontend) seems to be helpful. Intuitively, developers are used to shoving strings into sinks, and it seems desirable to preserve that where it doesn't conflict with security goals. FWIW, in our code base, there are approx 10 times as many calls to
I see the point. So, in general, you propose something akin to:
This looks interesting, and might be a boon for adoption, especially in relation to mikewest/tc39-proposal-literals#2: in the fallback we could check for literalness, which would remove the need to rewrite all sinks. We'd have to make sure that:
I'll let @mikewest comment on the actual implementation difficulty. In a polyfill, it's trivial.
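A polyfill-style sketch of the literalness fallback (isLiteral is a stand-in for the capability from mikewest/tc39-proposal-literals#2, which does not exist yet; TrustedHTML.unsafelyCreate is assumed as the low-level builder):

// Placeholder for the literalness check from the tc39 literals proposal;
// no such runtime capability exists today, so this stays conservative.
const isLiteral = (s) => false;

// Fallback for HTML sinks: promote literal strings, reject everything else.
function htmlSinkFallback(value) {
  if (typeof value === 'string' && isLiteral(value)) {
    // Author-controlled compile-time constants can be promoted safely.
    return TrustedHTML.unsafelyCreate(value);
  }
  throw new TypeError('Only literal strings may be promoted implicitly.');
}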
Yes, that's what I had in mind. Perhaps hyperlink URLs are an even better example: for HTML, you could argue that HTML-sanitizing on the fly is not a clean approach in the long run (but might be expedient when adapting existing code), and if one is assigning plain text, one should just use textContent instead. With URLs, we'd do something like:
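(A sketch; the setSinkFallback registration below is hypothetical, while TrustedURL.sanitize is the builder mentioned earlier.)

// Hypothetical registration of a fallback for TrustedURL sinks (e.g. a.href).
// TrustedURL.sanitize accepts arbitrary input and returns a TrustedURL,
// so the hook itself needs no special privileges.
TrustedTypes.setSinkFallback('TrustedURL', (value) => {
  return TrustedURL.sanitize(String(value));
});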
There are two plausible ways of handling actual values that aren't accepted by the sanitizer (e.g. a whatsapp:// URL): silently replacing them with an innocuous value, or throwing an exception. In the Closure libraries we originally went with the former, but have since added debug-only asserts (i.e. it throws an exception in debug builds, but assigns innocuous values in prod builds).
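A sketch of that hybrid behavior (goog.DEBUG is Closure's debug-build flag; the assumption that rejected values come back as about:invalid is taken from later in this thread):

// Loud in debug builds, fail-safe (innocuous value) in production builds.
function sanitizeUrlWithAsserts(value) {
  const sanitized = TrustedURL.sanitize(String(value));
  // Assumption: the sanitizer maps rejected values to 'about:invalid'.
  if (goog.DEBUG && String(sanitized) === 'about:invalid' &&
      String(value) !== 'about:invalid') {
    throw new Error('URL was rejected by the sanitizer: ' + value);
  }
  return sanitized;
}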
I like the idea of supporting fallback to literals here. In Closure, we do this at the static checking step (the js-conformance rules allow literal strings in places where arbitrary strings are banned). It's pretty common to see code like that (or, even more commonly, assignments of literal constants such as the empty string).
Good point re the need to lock down configuration of fallbacks. This is not a security concern (assuming the platform enforces that the correct type comes out of the fallback). However, you're right that it would be a functional-correctness nightmare if every library assumed it could install its own fallbacks. If we go with the capability/singleton approach in #33 (comment), one way of achieving the lockdown is to expose the fallback-configuration API only through that capability.
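A sketch of that lockdown under the capability approach (all names are illustrative, not proposed API):

// The fallback configuration is only reachable through a capability that
// can be obtained exactly once, e.g. by application bootstrap code.
const ttConfig = TrustedTypes.takeConfiguration(); // throws on a second call
ttConfig.setSinkFallback('TrustedURL', (s) => TrustedURL.sanitize(String(s)));

// A library loaded later cannot obtain the capability again, so it cannot
// silently replace the application's fallbacks.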
Does the below summarize the non-interface ideas above?
Scenarios
Use cases
Merging this with #63. It seems like having a platform-provided fallback mechanism would really ease adoption. This could be implemented in several ways:
Is there any preference for implementing one of the above variants in the browser? cc @mikewest
Perhaps the following:
// assumed application helper for checking URL schemes:
const isHttpUrl = (s) => /^https?:\/\//i.test(s);

// declarative fallback: an exposed policy that only accepts http(s) URLs
TrustedTypes.createPolicy('fallback-allow-http', (p) => {
  p.expose = true;
  p.createURL = (s) => {
    if (isHttpUrl(s)) return s;
    throw new TypeError('Nope');
  };
});

// programmatic fallback
document.addEventListener('securitypolicyviolation', (ev) => {
  if (ev.violatedDirective == 'trusted-types') {
    console.log(ev.target, ev.propertyName); // e.g. <a>, 'href' - propertyName would have to be added to the event.
    if (ev.requestedType == 'TrustedURL') {
      // That property doesn't exist now in the event.
      // It could be inferred from ev.target & ev.propertyName.
      try {
        const url = TrustedTypes.getPolicy('fallback-allow-http').createURL(ev.blockedURI);
        ev.target[ev.propertyName] = url;
        ev.preventDefault();
      } catch (e) {
        // The policy rejected the value; leave the sink blocked.
      }
    }
  }
});
This might be hard, as the event object as currently specced purposefully doesn't include a lot of information; e.g. the offending property name and the requested type are not part of it.
It's mostly meant to contain data that might be exported off-domain to 3rd party CSP log analyzers, whereas in the TT case it's meant to help us recover in-document. It's also harder to polyfill an event-based solution, as browsers limit what properties an event object might have.
e.g. when TrustedURL gets replaced with about:invalid.