The subresource integrity spec notes in section 3.4 that:
A future revision of this specification is likely to include integrity support for all possible subresources, i.e., a, audio, embed, iframe, img, link, object, script, source, track, and video elements.
I've poked around and found a couple sort-of-related topics, but they tend to focus on downloads:
Those discussions seem to get tied up (I only skimmed them) in questions about CORS and other download-specific concerns that I don't think apply to images, so I thought I'd start a new thread here.
Motivation
At my organization, we do a lot of work to make legal data available. A new service we're launching will host pictures of judges on AWS S3 so that organizations can integrate those photos into their websites and applications. We have a Python project that lets you do something like this:
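(A hypothetical sketch of that workflow; the package, function, and URL below are illustrative names, not the project's real API.)

```python
# Illustrative only: the real package exposes a similar, but not identical,
# interface for looking up a judge's photo.
from judge_photos import get_portrait_url  # hypothetical helper

# Look up a judge by name and get back the S3 URL for their portrait.
url = get_portrait_url("Jane Doe")
print(url)  # e.g. https://judge-photos.s3.amazonaws.com/jane-doe.jpeg
```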
Then, once you have the link, you pop it into your application and you're set to go.
However, before rolling out this service, I did a short threat analysis: If our clients are hotlinking these judge photos into their websites, how can they be sure we're not serving them porn, or worse? If we get hacked, what could happen to the users of this service?
The best answer I could come up with is that they could proxy the images through their servers and do an integrity check before serving them from their own domain. Alternatively, there's a JavaScript version of subresource integrity, but, well, it's patented (!) and there's a 99% chance it doesn't work as well as the browser does.
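For concreteness, here's roughly what that proxy-side check could look like (a minimal sketch; the URL and expected digest are placeholders you'd publish out of band):

```python
# Sketch of the proxy-and-verify approach: fetch the remote image,
# check its digest against a known value, and only then re-serve it.
import hashlib
import urllib.request

IMAGE_URL = "https://example-bucket.s3.amazonaws.com/judges/jane-doe.jpeg"  # placeholder
EXPECTED_SHA256 = "<hex digest published out of band>"  # placeholder

def fetch_verified(url: str, expected_sha256: str) -> bytes:
    data = urllib.request.urlopen(url).read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("image failed integrity check; refusing to serve it")
    return data  # safe to cache and serve from your own domain
```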
Subresource integrity would be the perfect solution to this problem. If we could ship hashes with our Python package, users could easily make a tag like:
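Something along these lines (the URL is a placeholder, and an integrity attribute on img is exactly what isn't supported today, which is the point of this issue):

```html
<img src="https://example-bucket.s3.amazonaws.com/judges/jane-doe.jpeg"
     integrity="sha384-<base64-encoded digest shipped with our Python package>"
     alt="Portrait of Judge Jane Doe">
```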
I doubt we're a big target for hackers, but I wish we could give this advice to our users.
Others
Extending this a bit, I could see this being pretty useful for forums and even for image hosting platforms:
A random forum could calculate a hash the first time a user posts an image and then protect itself from that hotlinked image later changing to something lewd or malicious (a quick sketch of computing such a digest follows this list).
Google Photos could use integrity hashes to make sure that its own image hosting servers have not been hacked. (I'm assuming they have a distributed architecture with photos in one place and a DB of those photos elsewhere.)
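Producing the digest a site would store is straightforward; here's a small sketch (the file name is just an example) that formats a hash the way the SRI spec expects, i.e. the algorithm name, a dash, then the base64-encoded raw digest:

```python
# Compute an SRI-style digest string ("sha384-<base64 digest>") for a file.
import base64
import hashlib

def sri_digest(path: str, algorithm: str = "sha384") -> str:
    with open(path, "rb") as f:
        digest = hashlib.new(algorithm, f.read()).digest()
    return f"{algorithm}-{base64.b64encode(digest).decode('ascii')}"

print(sri_digest("judge.jpeg"))  # e.g. "sha384-oqVuAfXRKap7fdg..." (truncated)
```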
Conclusion
I'm guessing this will go nowhere because no browser vendor is backing it (yet?) and there isn't a lot of subresource integrity momentum these days. Still, I think this could be a useful tool, and I'd be interested in getting the conversation going if there's interest.
As a point of reference I've been part of some work to allow users to download their data from a platform for data portability reasons. This is similar in principle to the archive downloads that platforms like Facebook, Instagram and Twitter provide. We have a manifest expressed as HTML that a user can open locally on their computer to browse local resources like images and media files. It would be great if we could use the integrity attribute to allow the archive to be verified.
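A rough sketch of what one entry in such a manifest could look like if img supported integrity (the file path and digest are placeholders; today the attribute would simply be ignored):

```html
<!-- Hypothetical entry in the local archive manifest, with the digest
     recorded at export time so the archive can be verified offline. -->
<img src="media/photos/2019/beach.jpg"
     integrity="sha384-<base64-encoded digest recorded at export time>"
     alt="Photo exported with the archive">
```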