
[SRI] Support signatures/asymm key #449

Open
devd opened this issue Aug 11, 2015 · 31 comments

@devd
Contributor

devd commented Aug 11, 2015

For a modern website with a lot of JS files, listing the SHA hash of each and every file it might ever need adds up to quite a bit of overhead. We should allow the page to provide a public key whose signatures cover the JS/styles it loads; the browser would then check the signature provided in a header on each response.

@devd devd added this to the SRI-next milestone Aug 11, 2015
@annevk
Member

annevk commented Aug 11, 2015

Then you need to trust the CDN, no?

@devd
Contributor Author

devd commented Aug 11, 2015

Sorry, the page provides the public key to trust, and the browser verifies the signature for each JS file (sent as a header in the JS response).

@devd
Contributor Author

devd commented Aug 11, 2015

I updated the description to be a bit clearer, my apologies. Does this make more sense?

@devd
Contributor Author

devd commented Aug 14, 2015

I realized: another huge advantage of this would be that we can then fully distrust a CDN: we can remove the CDN from the script-src list and instead only include this public key in the script-src list. This has been brought up a few times in the current spec as a problem: listing all the hashes that are trusted in a script-src CSP policy will be really hard.

@hillbrad
Contributor

I kind of like the idea of using a key like a nonce. You provide the key
once, and then annotate each subresource with a signature over the hash of
the resource content. This means you still need to modify/decorate your
body content, but it means you don't need to rely on the host serving the
content to send headers or cooperate / coordinate with your policy
enforcement in any way.


@jonathanKingston
Contributor

This is similar to an idea we were exploring at work, where we wanted to verify the trust of the SRI hash in the first place by using the public TLS cert key to wrap around the SRI hash.

The core advantage would be that a compromised web head wouldn't be able to start publishing fake integrities unless it could also change the cert (and in a load-balanced system the key shouldn't be on the web head).


@devd
Contributor Author

devd commented Aug 15, 2015

I think we should support both: the case where cooperation from the CDN is needed and the case where it is not. At scale, including an arbitrary number of signatures/hashes is very painful. And a cooperating CDN is a perfectly fine model for SRI to target.

@ghost

ghost commented Sep 13, 2015

To complement the integrity attribute of SRI, how about an identity attribute which contains the public key that the remote resource is expected to be signed with?

Just like integrity, this attribute can be a whitespace-separated list of identities that are allowed signers of the remote content. The value itself can be a public-key-type prefix, followed by a dash, followed by the base64-encoded public key data. For example, for an ed25519 libsodium identity, something like:

ed25519-1EG6xEDUkN9Mmx8AAXfQMiUbw4uYzLUrfa52sGjSWD8=

In an element, this might look like:

<script src="https://example.com/script.js"
        identity="ed25519-1EG6xEDUkN9Mmx8AAXfQMiUbw4uYzLUrfa52sGjSWD8="></script>

To have a page-wide policy on all external resources:

<meta name="identity" content="ed25519-1EG6xEDUkN9Mmx8AAXfQMiUbw4uYzLUrfa52sGjSWD8=">
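A minimal sketch of how a client might split such an identity token into its algorithm label and raw key bytes. The token format is this proposal's, not a shipped standard, and `parse_identity` is a hypothetical helper name:

```python
import base64

def parse_identity(token: str):
    # Split "ed25519-<base64 key>" into (algorithm, raw public key bytes).
    alg, sep, b64 = token.partition("-")
    if not sep or not alg or not b64:
        raise ValueError("malformed identity token")
    return alg, base64.b64decode(b64)

alg, key = parse_identity("ed25519-1EG6xEDUkN9Mmx8AAXfQMiUbw4uYzLUrfa52sGjSWD8=")
# alg is "ed25519"; key is the 32-byte ed25519 public key
```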

For the HTTP response header, rfc6249 seems very simple and suitable for this purpose:

Link: <http://example.com/example.ext.asc>; rel=describedby; type="application/pgp-signature"

which seems simple enough but does involve another round-trip to fetch the signature file. This is much less complicated than other proposals which try to scrub out the signature line from the content which gets signed. With HTTP2 the extra round-trip can be avoided, so this mechanism has a nice performance upgrade path. I guess a data URI might also work to include the signature data inline, if the signature only refers to the content and not the headers.

For an ed25519 detached signature (crypto_sign_detached), this header might look like:

Link: <http://example.com/whatever.sig>; rel=describedby; type="application/ed25519-signature"

Or using a combined mode signature (crypto_sign), the content itself could include the signature data, but that seems like work for another future version that specifies its own Accept-Encoding style protocol negotiation.
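For illustration, a rough sketch of extracting the signature URL from such a Link header value. This handles only the exact shape shown above; real Link-header parsing (RFC 8288) is more involved, and `signature_url` is a hypothetical name:

```python
import re

# Matches only the simple form: <url>; rel=NAME; type="MEDIA-TYPE"
LINK_RE = re.compile(r'<([^>]+)>\s*;\s*rel=(\w+)\s*;\s*type="([^"]+)"')

def signature_url(link_value: str):
    # Return (url, media type) when the link is a describedby signature link.
    m = LINK_RE.match(link_value.strip())
    if m and m.group(2) == "describedby":
        return m.group(1), m.group(3)
    return None
```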

@devd
Contributor Author

devd commented Sep 13, 2015

Once we get consensus on actually supporting asymm key, then we have multiple options for how to support it. I am a bigger fan of using the existing attribute over introducing a new one. But in any case, I think for now, this task needs to focus on getting everyone on board about supporting this :)

@jonathanKingston
Contributor

@devd would using the HTML resource's TLS cert to sign the integrity be hard to implement?

So for example:
integrity="sha256-..."

Could be:
integrity="cert-..." where ... is the result of signing the integrity value with the TLS private key.
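For comparison, today's hash-based integrity value is just the algorithm label, a dash, and the base64-encoded digest of the resource body. A short stdlib sketch of producing such a token:

```python
import base64
import hashlib

def sri_token(body: bytes, alg: str = "sha256") -> str:
    # Hash-based SRI metadata: "<alg>-<base64 digest of the body>".
    digest = hashlib.new(alg, body).digest()
    return alg + "-" + base64.b64encode(digest).decode("ascii")
```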

@sbp

sbp commented Dec 9, 2015

Michael Smith of the W3C introduced me to the SRI work, and @devd suggested that I contribute to this thread. The scenario that I have been looking at is automatically verifying downloaded content using digital signatures.

I have been asked why a hash or an HMAC is insufficient for this scenario. The use case is where you need to verify that the content is endorsed by a person or organisation known to be reputable within a web of trust.

Consider downloading a privacy suite such as Tor. If you look at their download page, you will find that underneath each download button is a (sig) link, and a little explanation:

https://www.torproject.org/download/download.html.en
https://www.torproject.org/docs/verifying-signatures.html.en

On the explanation page, the second of the two links above, the Tor developers provide an excellent rationale for their use of signatures:

An attacker could try a variety of attacks to get you to download a fake Tor. For example, he could trick you into thinking some other website is a great place to download Tor. That's why you should always download Tor from https://www.torproject.org/. The https part means there's encryption and authentication between your browser and the website, making it much harder for the attacker to modify your download. But it's not perfect. Some places in the world block the Tor website, making users try somewhere else. Large companies sometimes force employees to use a modified browser, so the company can listen in on all their browsing. We've even seen attackers who have the ability to trick your browser into thinking you're talking to the Tor website with https when you're not.

Some software sites list sha1 hashes alongside the software on their website, so users can verify that they downloaded the file without any errors. These "checksums" help you answer the question "Did I download this file correctly from whoever sent it to me?" They do a good job at making sure you didn't have any random errors in your download, but they don't help you figure out whether you were downloading it from the attacker. The better question to answer is: "Is this file that I just downloaded the file that Tor intended me to get?"

There are two further important factors which they did not mention. One is that sometimes, for whatever reason, people start to use different keys to sign their content. This even happened with Tor. The download was always signed by Erinn Clark, but now it may be signed by one of a number of Tor developers, presumably to introduce some redundancy to the system. Because signatures are used within public-key cryptography, it was possible for Erinn to sign the new keys as trusted, meaning that anybody who trusted Erinn can now trust the new signatures.

The other factor is that the use of digital signatures decouples the verification protocol from the transport protocol. This means, for example, that you don't have to download Tor from a TLS protected site. You don't have to trust the TLS certificate. You can use HTTP; you can use FTP if you like. You can transmit the files over the least secure connection that you have available, but the presence of a digital signature means that you can always verify what you received, independent of the security of the transport. This allows people to more widely mirror software. I could host a recent copy of Tor, you could download it from me and then verify it without having to trust me or any of the infrastructure that I have used to send you the file.

Unfortunately, there are drawbacks to this process, and that is what I have attempted to solve with the publication of the following Internet-Draft:

https://www.ietf.org/id/draft-palmer-signature-link-relation-00.txt

The problem described, and potentially solved, by this I-D is that downloading and verifying a signature is presently an ad hoc and laborious process. The links to a signature are just provided in text next to a download. The user has to be aware what a signature means, and why they should download it at all. They may have to install software to check the signature for them. They certainly have to manually enter the files into this verification software. The user interfaces of such software are commonly criticised.

All this is madness. The browser should be doing this for the user, and wrapping it all up into a user friendly interface. The same user interface guidelines that have resulted in people trusting TLS should be applied to public key cryptography on the web. But most importantly, the process should be automated. Imagine if you had to download a TLS certificate and establish the authenticity of every HTTP request yourself, manually! You wouldn't bother; and similarly, people are deterred from checking signatures because the browser doesn't do the right thing, which is to check them automatically.

I wrote and had the IETF publish this draft before I was aware of the SRI work. To check a signature, you have to provide some metadata that the browser can read. The metadata would have to be standardised, hence the I-D. The idea is to use a link relation, much like rel=describedby as suggested by @substack earlier in this thread, to associate content with a digital signature. This is trivial and common sense. The more difficult decision was in how to associate a link target with a link relationship, because in HTML the rel attribute only links the present document to the target of the link. What is needed in the digital signature case is to link the target of the link to another further arbitrary resource, which in this case is a digital signature.

There was no existing mechanism for doing this in HTML, so I made one up. It's called the rels attribute, and it's a sequence of link relation to value pairs. It's quite simple, but of course other models are possible, and some have been suggested by Martin Janecke and I on www-html. As long as there is some mechanism for making this work, that's fine, but of course one has to be careful in setting precedents for extending HTML, especially where a generic mechanism such as link relationship is concerned.

So that's where I'm at right now. I see that the Subresource Integrity work is focused on providing hash-based verification of scripts, so signature-based verification of downloads may be a further step. But I think it is certainly, as Michael Smith suggested, closely related to your work, and I'd love to hear what you think.

@mikesir87

Hello all! New guy on the block here... I too was playing around with this idea before I heard about SRI and have been a little shy to mention anything about it. But I saw the latest message go across the mailing list (thanks @sbp) and decided to say hi.

My idea for using signatures is actually based on OpenPGP. The script tag essentially specifies an ID for the signing key, a URL for the key, and a URL for the signature. The page fetches (and caches) the key, fetches the signature and source file, then validates the signature against the source, given the public key.

Why do it this way? Two things...

  • If I can specify a key ID for the signature, I don't need to go and update my hashes each time a CDN-hosted file is updated. If the signature uses the same key, execute the script. Otherwise, don't trust it.
  • As a future step, it might be possible to utilize the Web of Trust model (for more info, read the PGP documentation). There could be a script-src tag specifying a single key that represents the main site's key. Then, as the signatures are fetched for the individual scripts, a simple check is needed to see if the main site trusts the signing key. If it doesn't, don't execute the script.

I made a working prototype of this idea, although it's not aligned with the SRI model. But, feel free to take a look at http://pgp.mikesir87.io/signedJs/

Feel free to post feedback/further ideas. Thanks!

@sbp

sbp commented Dec 9, 2015

If I can specify a key ID for the signature

You shouldn't do this because IDs can be trivially forged. The 32-bit ID only has a space of 2 ** 32 = 4,294,967,296 IDs, so it is possible to use brute force key generation until you get a desired one. Always use the full fingerprint, and not just the key ID. More information on this here:

https://help.riseup.net/en/security/message-security/openpgp/best-practices#dont-rely-on-the-key-id
https://security.stackexchange.com/questions/74009
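To illustrate why the short ID is weak, here is a toy sketch in the spirit of OpenPGP v4, where the fingerprint is a SHA-1 over the key packet and the short key ID is just its low 32 bits. Hashing the raw key bytes here is a simplification, not the real packet format:

```python
import hashlib

def fingerprint(pubkey: bytes) -> str:
    # Simplified stand-in for an OpenPGP v4 fingerprint (SHA-1, 160 bits).
    return hashlib.sha1(pubkey).hexdigest()

def short_key_id(pubkey: bytes) -> str:
    # The 32-bit "short key ID" is just the tail of the fingerprint, a
    # space small enough to brute-force a colliding key.
    return fingerprint(pubkey)[-8:]
```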

Your signature discovery algorithm is to append a file extension:

location : srcLocation + ".asc",

I think it would be better to allow arbitrary locations for the signatures, though at an obvious cost in markup verbosity. The use of text/x-javascript-signed and text/x-javascript-plain is probably unnecessary too, because you could just standardise an attribute to be used in place of data-key-id.

Having said all of that, I think your work is great—you're thinking about similar issues to me and the SRI team, and you have running code to address the problem. Using the key fingerprint as a makeshift hybrid of the hash and signature models, to avoid the need to consult the web of trust, is also interesting.

@mikesir87

Thanks for the feedback!

Use of full fingerprint IDs... awesome. I can make that change.

Discovery of the signature... yeah. I was going to come up with a better idea there. Could be another attribute either on the script tag itself or with the SRI attributes. Was good enough to get a small prototype working though. :)

The use of text/x-javascript-signed and text/x-javascript-plain was only used to prevent the browser from auto-fetching and executing the resources and allow my code to do it, since I don't know enough to move the prototype into the browser code itself.

@sbp

sbp commented Dec 9, 2015

What happens if you remove the type attribute entirely, and then inject the ...-signed media type based on the presence of even just the existing data-key-id attribute?

I'm not sure that I understand the use of ...-plain. Isn't that equivalent to a regular script that has no associated signature? You're loading it through:

document.querySelectorAll("script[type^='text/x-javascript-']");

And then passing it to a RemoteScript handler, but it appears that this handler isn't doing anything that the browser wouldn't normally do. What was the plan for it?

@mikesir87

The reason I had to make the -plain type was that the order of JavaScript source execution matters, meaning I need to verify and execute each script in order. Since the signed ones were first, I couldn't allow the non-signed version to use the normal type of text/javascript.

I could probably use a single type and base the handling difference on the presence of the data-key-id attribute.

I also tried removing the type attribute entirely, but some browsers just make an assumption that it's JavaScript and load the files.

@sbp

sbp commented Dec 9, 2015

Thanks for the explanation!

Issue #497 is closely related to this issue. @mozfreddyb suggested there using an integrity attribute with <a>. The case may be made that the resolutions to #449 and #497 should be as integrated and consistent as possible. One alternative therefore is extending the integrity prefixes to allow digital signatures instead of just hashes:

integrity="rfc4880sig-filepathhere"

This would however mean that the attribute value no longer aligns with, and would instead be an extension of, the CSP2 source list syntax:

http://www.w3.org/TR/CSP2/#source-list-syntax

So that may not be desirable.
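The extension could be sketched as a classifier over the whitespace-separated integrity tokens. The prefix set and function name are illustrative only, not part of any spec:

```python
# Registered hash prefixes in SRI; anything else is treated here as a
# signature reference, e.g. the hypothetical "rfc4880sig" prefix above.
HASH_ALGS = {"sha256", "sha384", "sha512"}

def classify_integrity(attr: str):
    # Split integrity metadata into (kind, prefix, value) triples.
    out = []
    for token in attr.split():
        prefix, _, value = token.partition("-")
        kind = "hash" if prefix in HASH_ALGS else "signature"
        out.append((kind, prefix, value))
    return out
```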

@hillbrad
Contributor

hillbrad commented Dec 9, 2015

The first step here is to give a clear threat model for the problem.

With SRI "level 1", there were many options we explored early on, and we
arrived at one clear threat model which we felt the browser was in the best
position to mitigate. A site X wants to use a CDN Y. If Y has lower
security than X, or Y is used by many Xs (think the jQuery CDN) an attacker
may target Y in order to transitively compromise X through a subresource
include like a script. The browser is in a position to receive information
from X, knows how to trust X already from the existing web PKI, and is in a
position to intervene by declining to include the corrupted resource.

We specifically excluded, at this time, even attaching integrity attributes
to download anchors. Even though it is an obvious use case, we didn't feel
that the last condition was satisfied - the browser is not well-positioned
to intervene. Because the metadata is delivered in the page context but a
download creates side effects outside that context, if a download was
blocked, the user might simply paste the link directly into the address
bar, at which time there is no integrity attribute to check, and we had no
user research to understand if and how we could effectively communicate why
a download failed in a way that would discourage them from doing this.

There are many more such questions with these proposals, and I think a more
formal threat model is an important first step to understanding what can be
done and what is worth doing. Think about an adversary's goals and how
they would try to achieve them, how they might route around such a feature
or confuse a user, and where on the adversary's "kill chain" is the browser
uniquely positioned to interpose and break that chain, with the
restrictions it has. (e.g. any metadata delivered this way is ultimately
contextual to the resource it arrives with, and is ultimately bound to the
HTTPS PKI trust model for delivery)

So:

Who is the attacker?
What are they trying to do?
How do they do it?
How can the browser stop them?
How and why can the user trust that intervention?
How can the attacker circumvent the intervention, or why can they not?


@mikesir87

Sounds good @hillbrad! To me, the threat model is basically the same as SRI's original model... prevent tampered JavaScript from executing. It's just using another method to perform the verification, rather than using just a hash.

The main difference is that site X instructs the browser what signing keys are needed in order to trust the resources from site Y. This allows a developer, if using an OpenPGP method, to only supply a key fingerprint and location for the signature. If updates are made to the intended resource, I don't have to update my HTML code to modify a hash, as an updated signature has been created and verifies the code change was intended.

Using Hashes: I reference //code.jquery.com/jquery.min.js, the latest version of the jQuery library. I provide the hash in the HTML. The browser fetches the source. Validates the hash. Executes the code. It works. A new version of jQuery is published. The hash no longer matches. The script isn't executed. Site breaks. Users complain.

Using Signatures: I reference the same source, but provide the key fingerprint and signature file location (however those are to be specified). Browser fetches the source, signature, and key. Verification is made. Code is executed. A new version of jQuery is published. Get the new signature. Since it's signed by the same key, it still validates and the code is executed.

In order for an attacker to circumvent the signature verification, there are only two methods I can see (anyone else, chime in):

  1. Modify the source and generate a new signature. This requires compromising the signing private key in order to execute the malicious code.
  2. Compromise site X or MITM, modifying the key fingerprints/sources. This is an existing attack vector on a hash-based SRI.

Method 1 is a new vector compared to hash-based SRI. But it is still far more secure than no integrity check at all. That's what I've got for now...
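The two models can be contrasted in a small sketch. The `verify` callback stands in for a real asymmetric primitive (e.g. ed25519 or OpenPGP), which Python's standard library does not provide:

```python
import hashlib
import hmac

def hash_check(body: bytes, expected_hex: str) -> bool:
    # Hash-based SRI: pins one exact byte sequence; any new release fails.
    return hmac.compare_digest(hashlib.sha256(body).hexdigest(), expected_hex)

def signature_check(body, signature, trusted_fpr, verify) -> bool:
    # Signature-based SRI: pins a publisher key, so a re-signed new release
    # still passes. `verify` returns (signer fingerprint, is_valid).
    signer_fpr, ok = verify(body, signature)
    return ok and hmac.compare_digest(signer_fpr, trusted_fpr)
```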

@hillbrad
Contributor

hillbrad commented Dec 9, 2015

For existing SRI, the party doing the inclusion, "X", has to make their own
decision about the trustworthiness of the /content/, while explicitly
distrusting the person providing it, "Y".

For this new proposal, "X" is choosing to trust "Y" by way of a content
signing key, in addition to trusting "Y" by way of an HTTPS certificate.

How much additional assurance do you get by this, and is it worth the
complexity cost? Certainly one can imagine a scenario in which Y's
webserver is compromised but Y's code signing key is kept offline and only
used in a very carefully controlled software publication process. But one
can also just as easily imagine (and in my days as a consultant, I can
assure you I've seen) a situation where the signing key is checked into public
revision control or otherwise trivially obtainable by many adversaries and
represents only a minimal speed-bump.

SRI where X provides the hash gives X a strong guarantee that no matter how
(in)competent Y and those who attack Y are, X is protected if the hash
function is sound.

SRI where X trusts Y's key for code signing gives, "well, maybe this is
better, if Y is more capable than Y's attackers, but you can't really know
how much better". It is (expensive) defense-in-depth that makes roughly
the same guarantees as HTTPS.

So, is that worth it? Do we have evidence of the existence of the class of
attackers and incidents where web server but not code signing keys are
compromised?


@mikesir87

One thing to mention (as I realize I didn't make it clear earlier) is that party Y is not the one performing the code signing. As part of the developer's build process for the JavaScript source (whether using node, gulp, grunt, etc.), a step is to sign the code using a private GPG key. Then, both the source and signature are published to content server Y, which only serves static content (think a CDN-like environment).

For this new proposal, "X" is choosing to trust "Y" by way of a content
signing key, in addition to trusting "Y" by way of an HTTPS certificate.

Not quite. For an OpenPGP setup, in most cases, X would trust keyserver Z for the content's public signing key. Since X is providing the fingerprint for the key and keys are self-validating, there's no risk in a bad key being delivered. Y is indeed providing the HTTPS certificate, but is only serving as the host of the content, not the creator of the content.

one can also just as easily imagine (and in my days as a consultant, I can
assure I've seen) a situation where the signing key is checked into public
revision control or otherwise trivially obtainable by many adversaries and
represents only a minimal speed-bump

Definitely a valid point and one in which education will be needed. But, this is the case in any code deployment environment requiring code signature. Android and iOS applications already use code signing where this is an issue too. Same thing with anyone that deploys Java artifacts into Maven central.

So, is that worth it?

Good question. Sure... it's overhead to validate signatures. But then you know for sure that the code you're executing is the code created by the developer that created the signature (assuming the private key wasn't compromised). And I don't have to worry about changes in versions.

Do we have evidence of the existence of the class of
attackers and incidents where web server but not code signing keys are
compromised?

No, but if the signature is created by the developer during the build, then any compromise of the web server would result in that outcome (web server compromised but not signing keys). If I misunderstood the question, let me know and I'll be happy to elaborate.

@mikesir87

@sbp - I just updated my prototype to be more closely aligned with the current SRI implementation.

I'm going to work on a new version that makes use of a script-src tag that identifies a key that will then leverage the Web of Trust when validating the signatures.

@kojiromike

I didn't realize some work was already happening on this front. I began working on a PoC (just using frontend tools) of subresource signing here. I'll try to read up and get involved in the wider community discussion as time allows.

Note: My name is also Michael Smith, but I am not of the W3C. I think there are 36,000 of us.

@kojiromike

@sbp On a slight tangent, I read https://www.ietf.org/id/draft-palmer-signature-link-relation-00.txt and immediately thought of the troubling off-browser edge case

curl -sSL https://example.org/script.sh | sudo bash

that is becoming so prevalent. Whereas your draft is focused on the browser, are you aware of any standard that would apply to a tool that could execute arbitrary resources from a URL after verifying their expected signatures?

@Lennie

Lennie commented Dec 15, 2015

The notary tool from Docker solves this:
https://github.com/docker/notary

They are using that as the basis for the image verification system:
https://blog.docker.com/2015/08/content-trust-docker-1-8/

Here is their curl example from the github readme:
curl example.com/install.sh | notary verify example.com/scripts v1 | sh

@hillbrad
Contributor

I'd like to make one note of caution here: if you are not a member of the WebAppSec WG, we cannot accept submissions unless you are willing to sign a contributor's IPR agreement. This is necessary in order to assure that the W3C can maintain our commitments to produce specifications that are unencumbered by patents and free for everyone to use. Please submit requirements, issues, bugs, etc., but if you want to propose anything which you expect or hope might end up as a normative requirement in a specification, please contact me directly at hillbrad@gmail.com so we can discuss how the WG can accept your contributions. Thanks!

@homakov

homakov commented Dec 19, 2016

@hillbrad is there any way I can become a member of the WebAppSec WG? I'm very interested in the subject, but I don't know what the requirements are.

@jyasskin
Member

FYI, @mikewest has started a proposal for this in https://github.com/mikewest/signature-based-sri.

@lrvick

lrvick commented Sep 20, 2018

With the increase of sophisticated domain takeover attacks on web wallet services like myetherwallet and others, being able to pin signing keys for trusted javascript in a browser has perhaps never been more important.

Has there been any movement on this? If not are there any suggestions on how to get this moving?

@devd
Contributor Author

devd commented Sep 22, 2018 via email

@jkrems

jkrems commented Dec 31, 2020

Very late to the party, but I ran into this while thinking through SRI for web bundles (and dynamic subsets of web bundles). If signature-based SRI were introduced, would it make sense to include some sort of challenge request header with the public key? Otherwise key rotation seems more painful, but I'm not 100% sure that's a risk in practice.
