
Preventing downloading images or objects until they are visible in the viewport #2806

Closed
JoshTumath opened this issue Jul 1, 2017 · 90 comments
Labels
addition/proposal (New features or enhancements); needs implementer interest (Moving the issue forward requires implementers to express interest); topic: img

Comments

@JoshTumath

JoshTumath commented Jul 1, 2017

See PR #3752

Problem

Many websites are very image heavy, but not all of those images are going to be viewed by visitors. This is especially true on mobile devices, where most visitors do not scroll down very far; it is mostly the content at the top of the page that is consumed. Most of the images further down the page will never be viewed, but they are downloaded anyway.

This slows down the overall page load time, unnecessarily increases mobile data charges for some visitors and increases the amount of data held in memory.

Example workaround

For years, the BBC News team have been using the following method to work around this problem. Primary images at the top of the page are included in the HTML document in the typical way using an img element. However, any other images are loaded lazily with a script. Those images are initially included in the HTML document as a div which acts as a placeholder. The div is styled with CSS to have the same dimensions as the loaded image and has a grey background with a BBC logo on it.

<div class="js-delayed-image-load"
     data-src="https://ichef.bbci.co.uk/news/304/cpsprodpb/26B1/production/_96750990_totenhosen_alamy976y.jpg"
     data-width="976" data-height="549"
     data-alt="Campino of the Toten Hosen"></div>

Eventually, a script will replace it with an img element when it is visible in the viewport.
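
For illustration, a minimal sketch of such a replacement script (not the BBC's actual implementation; class and data attributes follow the placeholder markup above) could use an IntersectionObserver to swap each placeholder for an img as it approaches the viewport:

<script>
// Minimal sketch: swap each placeholder div for an img once it nears the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const placeholder = entry.target;
    const img = document.createElement('img');
    img.src = placeholder.dataset.src;
    img.width = placeholder.dataset.width;
    img.height = placeholder.dataset.height;
    img.alt = placeholder.dataset.alt;
    placeholder.replaceWith(img);
    obs.unobserve(placeholder);
  }
}, { rootMargin: '200px' }); // start loading a little before the image is visible

document.querySelectorAll('.js-delayed-image-load')
  .forEach(el => observer.observe(el));
</script>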

Doing this with a script is not ideal, because:

  1. If the visitor has scripts disabled, or the script fails to load, the images won't ever appear
  2. We don't know in advance the size of the visitor's viewport, so we have to arbitrarily determine which images to load lazily. On a news article, visitors on small viewports will only initially see the News logo and an article's hero image, but larger viewports will initially be able to see many other images (e.g. in a sidebar). But we have to favour the lowest common denominator for the sake of mobile devices. This gives users with a large viewport a strange experience where the placeholders appear for a second when they load the page.
  3. We have to wait for the script to asynchronously download and execute before any placeholders can be replaced with images.

Solution

There needs to be a native method for authors to do this without using a script.

One solution to this is to have an attribute for declaring which images or objects should not be downloaded and decoded until they are visible in the viewport. For example, <img lazyload>.*

Alternatively, a meta element could be placed in the head to globally set all images and objects to only download once they are visible in the viewport.

* An attribute with that name was proposed in the Resource Priorities spec a few years ago, but it didn't prevent the image from downloading - it just gave a hint to the browser about the ordering, which is probably not as useful in an HTTP/2 world.
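
Purely as a hypothetical sketch of the two forms described above (neither the attribute nor the meta name is settled syntax in this proposal):

<!-- Per-image opt-in (hypothetical syntax) -->
<img src="figure.jpg" lazyload alt="Below-the-fold figure">

<!-- Or a hypothetical document-wide opt-in from the head -->
<meta name="lazyload" content="on">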

@domenic
Member

domenic commented Jul 1, 2017

Hmm, this was previously discussed at https://www.w3.org/Bugs/Public/show_bug.cgi?id=17842, but GitHub is more friendly for people. Let me merge that thread into here, but please please please read all the contents of the discussion there, as this is very well-trod ground and we don't want to have to reiterate the same discussions over again.

@wildlyinaccurate

I've just spent an hour reading the thread on the original bug report (which @JoshTumath actually reported). There was initially confusion between two features: (1) Being able to tag images as "not important" so that the browser can give priority to other resources. (2) Being able to opt in to loading specific images only at the point where they are in the viewport or just about to enter it. This issue is specifically for (2). I will refer to this as "lazy loading".

The thread goes around in circles and doesn't really have a clear outcome, although the implementations discussed still seem valid and relevant today (Jake's summary in comment 49 is a good point to start at if you don't want to read the entire thread). I'm going to try not to repeat too much from that thread, but it has been 5 years now and as far as I can see lazy loading images is still a relatively common pattern. On top of that, the profile of the average internet-connected device has changed drastically (under-powered Android devices on very expensive cellular connections) and in my opinion the argument for lazy loading images is stronger now than it was 5 years ago.

I'm going to provide some insight into a use case that I'm very familiar with: the BBC News front page. I'll do this in the hopes that it provides some real life context around why I think lazy loading images is important, and why doing it in JS is not good for users.

Loading the page in Firefox in a 360 x 640 viewport from the UK (important because the UK does not get ads, which skews the results), the browser makes the following requests:

  • On the initial load: 49 requests, 314.43 kB transferred.
  • After scrolling a quarter of the way down the page (32% of mobile users reach this point): 57 requests, 373.06 kB transferred.
  • After scrolling halfway (20% reach this point): 66 requests, 437.95 kB transferred.
  • After scrolling to the bottom (1% reach this point): 84 requests, 546.60 kB transferred.

We use lazysizes to lazy load all but the very first article image. Lazysizes makes up about half of our JS bundle size. I know it's overkill for our use case but it's a popular and well-tested library. We load our JS with a <script async> tag, so it can take some time before the JS is executed and the images are inserted into the document. The experience of seeing so many image placeholders for several seconds can be quite awkward. We actually used defer for a while but the delay was deemed too long on slower devices.

From our point of view the benefits of the UA providing lazy loading are:

  • We literally halve the amount of JS in our bundle (although there are several other bundles from other BBC products so the real impact on the user is not that great on this page).
  • The UA can load images earlier, probably as early as DOMContentLoaded.
  • The UA can decide whether to lazy load at all (e.g. only lazy load on cellular connections).

Despite Ilya's arguments against lazy loading in general, we've been doing it for 5 years and we're going to continue doing it until cellular data is much cheaper. If we got rid of our lazy loading, two thirds of our mobile users would download 170kB of data that they never use. Keeping the next billion in mind, that's about 3 minutes of minimum wage work. At our scale (up to 50M unique mobile visitors to the site each week) 170kB per page load starts to feel uncomfortably expensive for our users.

So what do the WHATWG folk think? Is it worth having this conversation again? Is there still vendor interest? Mozilla expressed interest 5 years ago but it seems like nothing really happened.

@jakearchibald
Contributor

We literally halve the amount of JS in our bundle

Intersection observers mean the JS for triggering loading on element visibility is tiny.

The UA can load images earlier, probably as early as DOMContentLoaded.

That's also possible with a small amount of JS.

The UA can decide whether to lazy load at all (e.g. only lazy load on cellular connections).

Yeah I think browser heuristics (along with no JS dependency) are the remaining selling points of something like lazyload. But is it enough to justify it?
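
(For comparison, a rough userland approximation of the "only lazy load on constrained connections" heuristic is possible with the Network Information API, though it is only available in some browsers; this is a sketch, not a recommendation:)

<script>
// Rough sketch: decide whether to lazy load based on the connection.
// navigator.connection is not available in all browsers.
const conn = navigator.connection;
const shouldLazyLoad = !!conn &&
  (conn.saveData || ['slow-2g', '2g', '3g'].includes(conn.effectiveType));
</script>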

@wildlyinaccurate

wildlyinaccurate commented Jul 4, 2017

Intersection observers mean the JS for triggering loading on element visibility is tiny.

Yeah, fair call. If we drop our big ol' lazy loading JS for a lazyload attribute we may as well drop it for 10 lines of intersection observer wiring.

I guess the thing that appeals to me most about a lazyload attribute is that it's pretty much the minimum amount of friction you could have for implementing lazy loading, and it leaves all of the nuance up to the UA. In my experience developers don't really know or care about the nuance of whether their JS is blocking or deferred, or runs at DOMContentLoaded or load. If there was a big slider that controlled who did the most work (UA o-----------|--o Devs), I would shift it all the way to UA, because devs often don't have the time to do things in a way that provides the best experience for users. I realise this kind of thinking goes against the Extensible Web Manifesto, though. 🙊

@Zirro
Contributor

Zirro commented Jul 4, 2017

I can see two more arguments in favour of an attribute. The first is that lazy loading mechanisms which depend on scripts have a significant impact for user agents where scripts don't execute. To prevent images from loading early, the images are only inserted into the DOM later on, leaving non-scripting environments without images at all. Few sites seem to think about the <noscript> element these days.
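
(A common userland mitigation for that first point, sketched here with placeholder file names, pairs the script-dependent placeholder with a <noscript> fallback so non-scripting agents still get the image:)

<div class="js-delayed-image-load"
     data-src="photo.jpg" data-width="976" data-height="549"
     data-alt="A below-the-fold photo"></div>
<noscript>
  <img src="photo.jpg" width="976" height="549" alt="A below-the-fold photo">
</noscript>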

The second is that providing it through an attribute means that the user can configure the behaviour as they prefer to experience the web. Someone on a slow connection might want to make images start loading earlier than when the image enters the viewport in order to finish loading in time, while someone else with a lot of bandwidth who dislikes lazy loading can disable it entirely.

(In general, I believe it is important that common website practices are standardised in order to give some control of the experience back to the user, or we may eventually find ourselves with a web that is more of a closed runtime than a document platform which is open to changes by extensions, user scripts and userstyles.)

@jakearchibald
Contributor

@Zirro those arguments are the "browser heuristics" and "no JS dependency" benefits I already mentioned, no?

@Zirro
Contributor

Zirro commented Jul 4, 2017

@jakearchibald I suppose I understood the "no JS dependency" benefit as referring only to having to load less JavaScript rather than the content being available to non-scripting agents, and missed the meaning of "browser heuristics" in your sentence. Still, I hope that detailing the arguments and why they are important can help convince those who are not yet sure about why this would be useful.

@domenic
Member

domenic commented Jul 4, 2017

In general non-scripting agents are not a very compelling argument to get browsers to support a proposal, given that they all support scripting :). (And I believe these days you can't turn off scripting in any of them without extensions.)

@Zirro
Contributor

Zirro commented Jul 4, 2017

@domenic I would hope that they see the value in having a Web that is accessible to all kinds of agents beyond their own implementations, much like a person without a disability can see the value of designing a website with accessibility in mind.

@JoshTumath
Author

In general non-scripting agents are not a very compelling argument to get browsers to support a proposal, given that they all support scripting :).

@domenic The issue is more whether these scripts fail to download, which does lead to an odd experience. It's becoming harder and harder these days to progressively enhance websites as we seem to depend on scripting more and more for the core functionality of our websites.

Yeah I think browser heuristics (along with no JS dependency) are the remaining selling points of something like lazyload. But is it enough to justify it?

I think both of these are big selling points for the reasons above. As I say, this is not something that's possible to progressively enhance. There is not any way to provide a fallback for those for whom the JS fails for whatever reason.

A few years ago, GDS calculated how many visits do not receive 'JavaScript enhancements', which was a staggering 1.1%. Like GDS, at the BBC, we have to cater to a very wide audience and not all of them will have stable internet connections. I have a good connection at home and even for me the lazyloading script can fail to kick in sometimes.

Additionally, I feel as though we haven't covered one of the main issues with this that I mentioned in my original comment:

We don't know in advance the size of the visitor's viewport, so we have to arbitrarily determine which images to load lazily.

Because we're using a script, we've had to use placeholder divs for most images. While this is great for mobile devices whose viewports are too small to see many images at once, this is really unhelpful on large viewports. It creates an odd experience and means we can't benefit from having the browser start downloading the images as normal before DOMContentLoaded is triggered. Only a browser solution can know in advance the viewport size and determine which images to download immediately and which ones to only download once scrolled into view.

@hartman

hartman commented Jul 4, 2017

@domenic The issue is more whether these scripts fail to download, which does lead to an odd experience. It's becoming harder and harder these days to progressively enhance websites as we seem to depend on scripting more and more for the core functionality of our websites.

I completely agree with this. At Wikipedia/Wikimedia, we have seen that interrupted JS downloads in low quality bandwidth situations are one of the most common causes of various problems. And that's also exactly the user situation where you'd want lazy loaded images. I'd guess with service workers you could do lazy loaded images as well, and then at least you're likely to have them on your second successful navigation, but yeah:

It's becoming harder and harder these days to progressively enhance websites as we seem to depend on scripting more and more for the core functionality of our websites.

Only a browser solution can know in advance the viewport size and determine which images to download immediately and which ones to only download once scrolled into view.

@addyosmani

A topic I would like to tease apart is whether lazy-loading of images alone is the most compelling use-case to focus on vs. a solution that allows attribute-based priority specification for any type of resource (e.g. <iframe lazyload> or <video priority="low">).

I know <img lazyload> addresses a very specific need, but I can imagine developers wanting to similarly apply lazyload to other types of resources. I'm unsure how much granular control may be desirable however. Would there be value in focusing on the fetch prioritization use-case?

@JoshTumath
Author

It would definitely be useful to have this for iframes, objects and embeds as well!

As for video and audio, correct me if I'm wrong, but unless the preload or autoplay attributes are used, the media resource won't be downloaded anyway until it's initiated by the user. However, if they are specified, it might be useful to be able to use lazyload so they don't start buffering until they are scrolled into view.

When you mention a more general priority specification, do you mean something like the old Resource Priorities spec? What kind of behaviour are you thinking of?

@smfr

smfr commented Nov 8, 2017

I believe Edge already does lazy image loading. For out-of-viewport images, it loads enough of the image to get metadata for size (with byte-range requests?), but not the entire image. The entire image is then loaded when visible to the user.

Would lots of small byte-range requests for out-of-viewport images be acceptable?
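
(For a rough idea of what such a request looks like from userland - URL and byte count here are illustrative, and the server must support range requests:)

<script>
// Fetch only the first bytes of the image, usually enough to read its
// dimensions from the file header, then parse them out of the buffer.
fetch('/images/hero.jpg', { headers: { Range: 'bytes=0-1023' } })
  .then(response => response.arrayBuffer())
  .then(buffer => {
    // parse width/height from the image header here
  });
</script>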

@annevk
Member

annevk commented Nov 9, 2017

@mcmanus I think the previous comment in this thread is of interest to you.

@shallawa

shallawa commented Nov 10, 2017

I have two questions:

  • Will the css background image use a similar attribute?
    .box { background-image: url("background.gif") lazyload; }

  • The image async attribute is discussed in "decode" attribute on <img> #1920. I think the 'lazyload' and 'async' attributes are closely related. The first postpones loading the image until it is needed. The second moves decoding to a separate thread, which skips drawing the image until decoding finishes. They both have almost the same effect when the image source or the decoded image is not available: the image will not be drawn; only the background of the image element will be drawn. When both the image source and the decoded image are available, the image will be drawn.

I can't think of any use of these cases:

async="on" and lazyload="off"
async="off" and lazyload="on"

If any of them is "on", the browser will be lazy loading or decoding the image. In any case, the user won't see the image drawn immediately. So shouldn't a single attribute be used to indicate laziness for both loading and decoding the image?

<img src="image.png" lazy>
and
.box { background-image: url("background.gif") lazy; }

@JoshTumath
Author

Will the css background image use a similar attribute?
.box { background-image: url("background.gif") lazyload; }

I guess that would be a separate discussion in the CSS WG, but at least in the case of BBC websites, the few background images that are used are visible at the top of the page, and therefore need to be loaded immediately anyway.

If any of them is "on", the browser will be lazy loading or decoding the image. In any case, the user won't see the image drawn immediately. So shouldn't a single attribute be used to indicate laziness for both loading and decoding the image?

It also depends on if these attributes would prevent the image from being downloaded entirely, or whether it would just affect the order in which the images are downloaded. (I think the latter would be much less useful.)

@annevk added the addition/proposal (New features or enhancements), needs implementer interest (Moving the issue forward requires implementers to express interest) and topic: img labels on Feb 17, 2018
@Malvoz
Contributor

Malvoz commented Feb 24, 2018

The content performance policy draft suggests <img> lazy loading. Although they only mention lazy loading of images and no other embeds, it seems that their idea is to enable developers to opt in to site-wide lazy loading.

@othermaciej

For a complete proposal, we probably need not just a way to mark an image as lazy loading but also a way to provide a placeholder. Sometimes colors are used as placeholders but often it's a more data-compact form of the image itself (a blurry view of the main image colors seems popular). Placeholders are also sometimes used for non-lazy images, e.g. on Medium the immediately-visible splash images on articles briefly show a fuzzy placeholder.

Also: Apple is interested in a feature along these lines.

@laukstein

laukstein commented Apr 5, 2018

@othermaciej in early 2014 I proposed a CSS placeholder property (similar to the background property, only applied until the image has loaded or failed) at https://lists.w3.org/Archives/Public/www-style/2014Jan/0046.html, and there still hasn't been any progress related to it.

@bengreenstein
Contributor

bengreenstein commented Apr 9, 2018

The Chrome team's proposal is a lazyload="" attribute. It applies to images and iframes for now, although in the future we might expand it to other resources like videos.

"lazyload" has the following states:

  • on: a strong hint to defer downloading BTF content until the last minute
  • off: a strong hint to download regardless of viewability
  • auto: deferral of BTF downloading is up to the user agent. (auto is the default.)

In Chrome we plan to always respect on and off. (Perhaps we should make them always-respected in the spec too, instead of being strong hints? Thoughts welcome.)

Deferring images and iframes delays their respective load events until they are loaded. However, a deferred image or iframe will not delay the document/window's load event.

One possible strategy for lazyload="on", which allows lazily loading images without affecting layout, is to issue a range request for the beginning of an image file and to construct an appropriately sized placeholder using the dimensions found in the image header. This is the approach Chrome will take. Edge might already do something similar.

We’re also open to the idea of supporting developer-provided placeholder images, though ideally, lazyloaded images would always be fully loaded before the user scrolls them into the viewport. Note that such placeholders might already be accomplishable today with a CSS background-image that is a data URL, but we can investigate, in parallel with lazyload="", a dedicated feature like lowsrc="" or similar.

Although we won’t go into the details here (unless you’re interested), we also would like to add a feature policy to flip the default for lazyload="" from auto to off. This could be used, for example, on a particularly important <iframe> to disable all lazyloading within that frame and its descendants.
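
(As markup, the proposal described above would look something like this; URLs are placeholders, and auto is the default when the attribute is omitted:)

<img src="hero.jpg" lazyload="off" alt="Important above-the-fold image">
<img src="figure.jpg" lazyload="on" alt="Below-the-fold figure">
<iframe src="https://example.com/widget" lazyload="auto"></iframe>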

@JoshTumath
Author

@bengreenstein It is great to hear your proposal. I have a couple of questions:

on: a strong hint to defer downloading BTF content until the last minute

Does this imply images will not be downloaded until visible in the viewport (at least on metered network connections)?

One possible strategy for lazyload="on", which allows lazily loading images without affecting layout, is to issue a range request for the beginning of an image file and to construct an appropriately sized placeholder using the dimensions found in the image header. This is the approach Chrome will take.

If width and height attributes are already provided by the author, will that negate the need for this request?

@othermaciej

Here's a number of thoughts on this proposal:

  • I wish there was a way to make this a boolean instead of a tristate, since boolean attributes have much sweeter syntax in HTML.

  • Bikeshed comment: If it has to be a tristate, maybe we can have more meaningful keywords than on and off. How about load=lazy, load=eager, load=auto? This also makes it feasible to add other values if a fourth useful state should ever be discovered. And it's also a bit more consistent with the decoding attribute (on which see more below).

  • Many developer-rolled versions of lazy loading use some form of placeholder so that seems like an essential feature. CSS background-image with a data: URL seems like a pretty inelegant (and potentially inefficient) way to do it.

  • It's always possible to see the placeholder state during an initial load or when scrolling fast soon after load on a slow network. So it can't be assumed that "lazyloaded images would always be fully loaded before the user scrolls them into the viewport". This is a good goal but not always achievable. Concretely, I frequently see the placeholder image on Medium posts when on LTE and can sometimes even see flashes on my pretty good home WiFi.

  • It would be good to figure out how this interacts with async decoding. Should lazy-loaded images be asynchronously decoded as if decoding=async was specified? I think probably yes, as the use cases for sync decoding don't seem to be consistent with lazy loading. At the extreme you could think of lazy as an additional decoding state, though that might be stretching the attribute too far.
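
(A hypothetical sketch of that last combination, using the bikeshedded keywords above rather than shipped syntax:)

<img src="figure.jpg" load="lazy" decoding="async" alt="Below-the-fold figure">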

@smfr

smfr commented Apr 10, 2018

Some additional thoughts:

  • it should be possible to specify a placeholder which is a content image (many sites use a low-res image placeholder and replace with a high-res version)
  • it needs to work with , srcset

@othermaciej

Good point. We need to consider <picture> too, where different <source> images may need different placeholders, since they may not all have the same size.

@Ambient-Impact

@eeeps I've tested that, and it doesn't seem to always catch all images on the initial load in both Firefox and Chrome. Sometimes it does, but sometimes the browser seems to just download a couple of images regardless, even if the MutationObserver is inlined in the <head>. I don't know if that's related to having the cache disabled while devtools are open, or if it's something to do with the way browsers parse HTML and fire off requests pre-emptively. I'd be interested to find out if anyone else is seeing the same results.

I very much wish we had a way to tell the browser to delay loading images without having to remove the src attribute, which rubs me the wrong way with regard to accessibility and simply having valid markup.
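
(For context, the kind of inlined-in-the-<head> approach tested above is roughly this sketch - an assumption about the technique being described - which, as the next comment notes, the speculative preparser can defeat:)

<script>
// Watch for <img> elements as they are parsed and strip their src so they
// don't load; the preload scanner may already have requested some of them.
new MutationObserver(records => {
  for (const record of records) {
    for (const node of record.addedNodes) {
      if (node.tagName === 'IMG' && node.src) {
        node.dataset.src = node.src;
        node.removeAttribute('src');
      }
    }
  }
}).observe(document.documentElement, { childList: true, subtree: true });
</script>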

@jakearchibald
Contributor

if it's something to do with the way browsers parse HTML and fire off requests pre-emptively

That's the reason.

@herrernst

@verlok

  • the distance ahead of the viewport's "fold" to which the browser should start loading the images
  • the time after which the lazy download should begin, to avoid loading images when the user is scrolling

I think these are mostly things the browser knows best (slow/fast network connection, is user scrolling fast?) and should decide itself.

@Link2Twenty

Link2Twenty commented Jan 11, 2019

@bengreenstein with auto would the idea be that, in time, all images could be lazy loaded or would auto just be the same as off?

I'm just wondering what auto actually adds, we're sacrificing the attribute being boolean to add it.

<img src="#" lazyload />
<img src="#" lazyload="on" />

If the three states are required I think @othermaciej's suggestion makes the most sense

load=lazy, load=eager, load=auto

Where missing and invalid values default to auto

Also, this leaves space for more functionality over time, like load="onrequest", which could require the user to interact with the image before it is fetched.

@stramel

stramel commented Feb 12, 2019

Good point. We need to consider <picture> too, where different <source> images may need different placeholders, since they may not all have the same size.

@othermaciej I'm also curious about the use within <picture> tag. Especially since it contains an <img> tag as well.

I agree with @Link2Twenty, that if we can't do a boolean attribute, I prefer @othermaciej's states.

load=lazy, load=eager, load=auto

@othermaciej

@Link2Twenty @stramel I also still agree with myself! I don't think a good justification has been given for the on/off/auto tristate. It is generally a confusing pattern in API design compared to having named states.

And we will regret on/off/auto badly if there is ever a fourth state. Just to raise some crazy hypotheticals: what if there was a "manual" state that would only load when explicitly asked via a DOM API? Or maybe a state that would eagerly load only the metadata?

In fact I don't think any of the points of feedback from my April 19, 2018 comment have been addressed.

@verlok

verlok commented Feb 12, 2019

Just to raise some crazy hypotheticals: what if there was a "manual" state that would only load when explicitly asked via a DOM API? Or maybe a state that would eagerly load only the metadata?

As the author of one of the most used LazyLoad scripts, I advise that we consider these two cases. Some of the script's users ask for the ability to load images via an API, and one of the most difficult things when using lazily loaded images is making them occupy the right space, so eagerly fetching the metadata would be a great idea!

So I agree with @othermaciej

@addyosmani

addyosmani commented Mar 7, 2019

Re-reviewing the thread, I'm a fan of @othermaciej's tristates (load=lazy, load=eager, load=auto) proposal. I do have open questions about <picture> support.

Looking at how lazy-loading responsive images has been tackled in userland, Lazysizes appears to define lazy-loading behavior on <img> where precautions (data-srcset) are taken to avoid fetching a <source> immediately:

<picture>
  <source
    data-srcset="500.jpg"
    media="(max-width: 500px)" />
  <source
    data-srcset="1024.jpg"
    media="(max-width: 1024px)" />
  <source
    data-srcset="1200.jpg" />
  <img
    src="data:image/gif;base64,R0lGODlhAQABA"
    data-src="1024.jpg"
    class="lazyload"
    alt="image with artdirection" />
</picture>

I wonder if our equivalent would be something akin to this:

<picture>
  <source
    srcset="500.jpg"
    media="(max-width: 500px)" />
  <source
    srcset="1024.jpg"
    media="(max-width: 1024px)" />
  <source
    srcset="1200.jpg" />
  <img
    src="1024.jpg"
    load=lazy
    alt="image with artdirection" />
</picture>

@mstancombe

From a privacy perspective, does this feature give a host the ability to embed lazy-loaded images to determine which parts of a page a user scrolls to, even if they have scripts turned off? I feel like there is a case for flagging how this can be misused, if that discussion hasn't already taken place. Perhaps browser vendors could disable this in their private browsing modes, but maybe that is out of scope for this issue.

@othermaciej

It would probably be a good idea to mention that privacy consideration. Of course, it applies only to the script disabled scenario, because otherwise, Intersection Observer provides the info directly. It's also not exact, because loading has to be started some time before the user scrolls to a particular point, and this may vary between browsers and devices.

@wolfbeast

Aside from the privacy issue mentioned by @mstancombe, there's another angle that should be considered here:
Monetization elements in web pages will no longer actually be loaded if they aren't in view. Many display ad services use impressions as the main statistic for paying revenue to webmasters. Impression counts will be severely cut if lazy loading prevents server hits when it is handled by the browser instead of in the ad tags. I don't think that's healthy for the already damaged display ad industry (damaged by widespread ad blocking), it will hit publishers in the pocket as the revenue model is forced to change, and it will push website owners to more intrusive methods of advertising.

@ryantownsend

ryantownsend commented Apr 9, 2019

@wolfbeast a few comments:

1. Does this not improve the situation for the ad industry?

They don’t want to be recording impressions for ads that aren’t actually seen, right? Presumably, at the moment they factor in that a percentage of ads will be loaded and recorded as an impression even though the user never scrolled to see them, so CPMs are artificially lowered. If lazy loading is adopted, reporting would be more accurate and CPMs could increase to cover the gap left by the lower volumes.

2. If the above is not valid, couldn’t you just set the value to eager (based on @addyosmani’s comment above / blog post) or off (as per the original discussion) to hint to the browser that the advert isn’t a candidate for lazy loading?

@PrinsFrank

PrinsFrank commented Apr 9, 2019

@mstancombe I am happy to see privacy considered here. I would have expected this to be discussed earlier, but you're the first one to mention it as far as I can see.

@othermaciej I agree that this is only applicable in an environment where JavaScript is disabled. I can't find any recent numbers on the total, but I am among the approximately 1 percent of people who regularly browse without JavaScript enabled for privacy reasons.

I think we should also consider that this feature opens up viewport size fingerprinting as well.

@Sora2455

Sora2455 commented Apr 9, 2019

@PrinsFrank Can't you no-JS fingerprint viewport size right now with something like this?
<picture>
  <source media="(min-width: 650px)" srcset="tracking/large.png">
  <source media="(min-width: 465px)" srcset="tracking/medium.png">
  <img src="tracking/small.png" alt="">
</picture>

@PrinsFrank

@Sora2455 I hadn't even considered that, but you're right!

@wolfbeast

wolfbeast commented Apr 9, 2019

@ryantownsend re:

  1. I think it actually allows the ad industry to treat publishers unfairly. Unless this behavior is consistent in all browsers, they are going to take the lowest common factor to calculate revenue for publishers (= website owners), which means that, at the very least during a period of transition but likely long after that, payouts will be lower than is fair for publishers. So yes, if you approach this from the industry side of things it'll be advantageous, but not from the publisher side of things. As said, the revenue model will change as a result -- the "% viewable" which is currently used to compensate for "below the fold" content will not represent the right amount if this proposal lands (because then that would always be 100% in browsers that support it).

  2. I did say in my post "if this is handled by the browser" which was discussed above, i.e.: if there is no control in the content. Even so, IIUC these values are only hints, not directives to the browser.

@stramel

stramel commented Aug 6, 2019

Is there an event or way to determine when a "lazy img" has started to load its source?

@domfarolino
Member

domfarolino commented Sep 15, 2019

@stramel I believe the answer is no, just as you cannot tell when a normal img begins to load its source (but I could be missing something?)

@myakura

myakura commented Sep 27, 2019

@stramel @domfarolino can't we use Resource Timing for that?
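
(Something like this sketch, assuming you know the image's URL - the URL here is illustrative:)

<script>
// Use Resource Timing to see when a lazily loaded image's request starts.
new PerformanceObserver(list => {
  for (const entry of list.getEntries()) {
    if (entry.initiatorType === 'img' && entry.name.endsWith('/figure.jpg')) {
      console.log('image fetch started at', entry.startTime);
    }
  }
}).observe({ type: 'resource', buffered: true });
</script>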

@domfarolino
Member

I guess you probably could..

@Malvoz
Contributor

Malvoz commented Oct 2, 2019

I'd probably be none the wiser reading the spec, so I'll just ask: is there anything for UAs to consider when authors both preload and lazy-load a resource? Would they benefit from having that clarified (if it isn't already) in the spec?

@eeeps
Contributor

eeeps commented Oct 23, 2019

My mental model of what will happen:

  1. preload grabs the image asap and puts it in the preload cache.
  2. The DOM parser (or speculative preparser) sees the <img loading=lazy> and waits to initiate its own load.
  3. Layout happens and if the image is in/near the viewport, a load is kicked off.
  4. The src URL is found in the preload cache, and the img loads near-instantly.

I think that makes sense? And has, at worst, a few frames' worth of penalty vs using preload without loading=lazy. @yoavweiss would know for sure...
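
(For reference, the combination being discussed looks like this; the URL is a placeholder:)

<link rel="preload" as="image" href="hero.jpg">
<!-- ...later in the document... -->
<img src="hero.jpg" loading="lazy" alt="Hero image">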

@Malvoz
Contributor

Malvoz commented Oct 23, 2019

Thanks @eeeps!

at worst, a few frames' worth of penalty

I've asked Lighthouse to consider the scenario in GoogleChrome/lighthouse#9516, so that (if the audit is implemented) users who run site audits will be flagged for indecisive loading. But, perhaps it'd be wise(r) if the spec would mention it, asking the UA with a "SHOULD", or "MAY", to notify developers with a console warning in these cases?

@yoavweiss
Contributor

@eeeps that's indeed what I'd expect to happen.

FWIW, Chromium and WebKit are already likely to show console warnings in those cases, if the image wasn't actually used a few seconds after onload.

@domfarolino
Member

domfarolino commented Nov 18, 2019

But, perhaps it'd be wise(r) if the spec would mention it, asking the UA with a "SHOULD", or "MAY", to notify developers with a console warning in these cases?

The only issue is that it is not possible spec-wise today. We'd need a way to know whether a lazy-loaded image's response was served from the preload cache or not, and if so display the warning you mention. (Currently the Fetch Standard does not know about a preload cache). With that said, I am comfortable with the Chromium/WebKit warnings issued when preload requests are not used. Would be interesting to see what lighthouse could come up with though.
