background fetch can start service workers #145
In general I like the idea of introducing a deferred event queue to service workers. The tricky part here is:

addEventListener('backgroundfetchsuccess', (bgFetchEvent) => {
  bgFetchEvent.updateUI({ title: 'Download complete!' });
});

I guess we could get developers to provide the 'fail' and 'success' titles/icons ahead of time. I'd need to think more about the capabilities we'd lose as a result. If we did this, we'd also need to think about backwards compatibility.
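To make the "ahead of time" idea concrete, here is a purely hypothetical sketch of what it might look like when starting the fetch from a page. The successTitle and failTitle options do not exist in the Background Fetch API (which only takes title, icons and downloadTotal); they are invented here just to illustrate the comment above.

async function startEpisodeDownload() {
  const swReg = await navigator.serviceWorker.ready;
  await swReg.backgroundFetch.fetch('episode-1', ['/episode-1.mp3'], {
    title: 'Downloading episode 1…',
    icons: [{ src: '/icon-192.png', sizes: '192x192', type: 'image/png' }],
    downloadTotal: 60 * 1024 * 1024,
    // Hypothetical additions: declared up front so no JavaScript would need to
    // run when the fetch settles. Not part of the real API.
    successTitle: 'Download complete!',
    failTitle: 'Download failed',
  });
}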
Can you summarise the kind of JavaScript you're worried about here? Would an aggressively short amount of execution time be acceptable? It doesn't seem like a tracking vector, because the download itself can do that.
Executing JavaScript, and thus spinning up an origin-specific process, is expensive. This goes against the goal of bg fetch to improve battery life. It also extends the time during which the user can be tracked, and there might be some time between when the download ends and the actual JS execution.

If the user is aborting a fetch, maybe for privacy reasons, it does not seem great to execute some JavaScript. In fact, chances are pretty high in that case that the user may never get back to the page, so there is clearly no need to notify the page proactively.

If the web site is allowed to run bg fetch in service workers in the background, this becomes a very powerful tracking tool and can be abused similarly to bg periodic sync: the user is not deciding when the download fails/finishes, the server can control that.
If we end up deferring events we should be careful not to create a storm of activity right in the middle of the next load of the site. Seems like a potential perf footgun.
Of course, if you defer the events too long then the site cannot render UI indicating the downloaded resources are available. Regarding battery, it seems some amount of deferring, without requiring a full delay until the next window open, would be adequate.
Are you thinking of something like the badging API?
Also, is Chrome planning to start service workers periodically to fire update events, or just the final events?
I think @wanderview is referring to something like:
Due to the events happening in parallel, it's likely that the user will see the
The service worker can only be started by events on the service worker global. The 'update' event is on the individual background fetch registration, so the service worker isn't woken for these, and even if it were, there wouldn't be an object there to dispatch the event on. There isn't a top-level 'background fetch update' event.
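As a rough sketch of that distinction (the registration id and the progress helper are made up; in the current spec the per-registration event is called 'progress'):

// service-worker.js: global functional events, the ones that can start the
// service worker under the current spec.
addEventListener('backgroundfetchsuccess', event => { /* ... */ });
addEventListener('backgroundfetchfail', event => { /* ... */ });
addEventListener('backgroundfetchclick', event => { /* ... */ });

// page.js: 'progress' fires on the BackgroundFetchRegistration object itself,
// so there is no global event that would wake the service worker for updates.
async function watchDownload() {
  const swReg = await navigator.serviceWorker.ready;
  const bgFetch = await swReg.backgroundFetch.get('episode-1');
  if (!bgFetch) return;
  bgFetch.addEventListener('progress', () => {
    updateProgressBar(bgFetch.downloaded / bgFetch.downloadTotal); // made-up UI helper
  });
}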
Agreed, the sequencing of events needs to be carefully studied. AIUI, if 'download complete' is displayed then the fetch succeeded, so there is a backgroundfetchsuccess event in the service worker event queue, but the service worker is not started just for this event.
Are you suggesting that the deferred event queue must be exhausted (plus any
backgroundfetchclick and backgroundfetchsuccess are enqueued in the same functional event task source, so there should be no race in the firing of these events. The navigation fetch will only happen after the service worker decides to do a navigation. At this point, I would hope the service worker has all the necessary information to suppress any potential reflow.

Currently, there is one task source for all functional events; maybe there should be one for BackgroundFetchEvent specifically. In that case, backgroundfetchsuccess might not always trigger running the service worker, but backgroundfetchclick would. This would then trigger firing events in the order of the task source event queue.

That said, this exhibits the issue that backgroundfetchsuccess and backgroundfetchclick are redundant in that scenario. This requires careful handling from the web application, which is a bit unfortunate. Not sure how we can improve the model here.
Event queues aren't held back by
I can't think of a way around this other than making the deferred event queue special in some way, in that it must be emptied (including async stuff) before functional events can be called. However, if the developer does anything potentially long-running in these events (like a fetch), they could lock up their app. The closest thing we have to this is the
A slightly less-worse alternative is to create an API like

This allows the developer to be smarter about when they wait, although of course it requires them to get it right. Oh, and not get themselves in a deadlock.
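The deadlock worry could look something like this: a sketch that borrows the pendingEventsDone() name proposed later in this thread (it is not a real API), awaited from inside one of the very events it would be waiting on.

// service-worker.js: hypothetical. If pendingEventsDone() only resolves once
// all deferred background fetch events (and their waitUntil promises) have
// settled, awaiting it inside one of those events can never resolve.
addEventListener('backgroundfetchsuccess', event => {
  event.waitUntil((async () => {
    await self.registration.backgroundFetch.pendingEventsDone(); // deadlock
    await processResponses(event.registration); // made-up app logic
  })());
});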
Right, it is also true that we want backgroundfetchclick to be processed as fast as possible.
Hm, but in the case of a photo album upload, or a video upload, the responses to the background fetch could contain important information, like the URL to the newly created album, or the URL to the published video.

That's why I landed on something like

Although, if something like
Right, the question I am asking is whether it is bad for the service worker to know that the bgfetch is a success before the success event is fired, for instance in a message/fetch/bgfetchclick event handler. AIUI, the spec is not prescribing that, so service worker scripts would need to deal with it anyway. We might be digressing a bit though. Are we fine with the following requirements?
The spec currently updates any bgfetch instances before dispatching the success event, which seems like the right way around. But I'm not sure what they'd have to deal with. Here's the order of things in the current spec:
If the click event happens before step 3, then app storage will not contain info for the completed bgfetch. So, maybe the app will choose to open a 'downloads in progress' page. It's possible that the bgfetch success event has fired by this point, but the processing hasn't completed. But that's fine: the download UI isn't showing 'complete' yet, so sending the user to a 'downloads in progress' page is fine.

Whereas, as far as I can tell, what you're proposing would move the UI update (step 4) before step 2. This means the UI is showing complete when the data hasn't yet been processed. If a click event happens before that processing, then it may act on outdated info. It seems bad if the user clicks a 'download complete' UI and they're taken to a 'downloads in progress' page.

That's why I've been suggesting something like
I get the privacy implications of firing these events sooner, so yeah I'd like to find another way that works. I don't think we've found a great option yet, though.
It can already update without running the service worker, in reaction to progress events. But, I guess you mean switching state from "in progress" to "complete". I don't think this is a requirement, but it may be something we need in order to make the other requirement work. Right now, the UI is updated in the success event. If we end up delaying the success event, we need some other way to update the UI. But that seems to mean updating the UI before the responses are processed by the app, which is where we get the race with things like click events.
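To illustrate the race being described, here is a rough sketch of a service worker that processes the fetched responses in the success event and routes clicks based on whether that processing has finished. The storage helpers, URLs and page paths are made up; only the event names, registration.matchAll(), record.responseReady, updateUI() and clients.openWindow() come from the actual APIs.

// service-worker.js
addEventListener('backgroundfetchsuccess', event => {
  event.waitUntil((async () => {
    // Copy the responses into app storage. For an upload this might include
    // server-provided data such as the URL of a newly created album.
    const records = await event.registration.matchAll();
    for (const record of records) {
      const response = await record.responseReady;
      await saveToAppStorage(event.registration.id, response); // made-up helper
    }
    await markFetchProcessed(event.registration.id); // made-up helper
    event.updateUI({ title: 'Download complete!' });
  })());
});

addEventListener('backgroundfetchclick', event => {
  event.waitUntil((async () => {
    // If the click arrives before the processing above has finished,
    // app storage doesn't yet know about the completed fetch.
    const done = await isFetchProcessed(event.registration.id); // made-up helper
    const url = done ? '/downloads/complete' : '/downloads/in-progress';
    await clients.openWindow(url);
  })());
});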
@annevk in this thread we're looking at alternatives to starting the service worker in the background. I think Mozilla are interested in this too, right? Maybe even for push? If there's enough interest I can organise a call around this.
Yes, Mozilla would like to avoid introducing new opportunities for starting service workers or keeping them alive when there are no fully active documents. cc @asutherland
The more I think about this, the more I think an API like this is useful:

await swRegistration.backgroundFetch.pendingEventsDone();

Where

Developers could use this in a page, or as part of a service worker event, like:

displayLoadingSpinnerAfterDelay();
await swRegistration.backgroundFetch.pendingEventsDone();
const data = await getOfflineMediaDataFromIDB();
displayOfflineMedia(data);
cancelLoadingSpinner();

It feels like the same mechanism could be used in push:

await swRegistration.pushManager.pendingEventsDone();

…if we wanted to explore the same 'delayed events' pattern there. I'm going to talk to some folks who are experimenting with background fetch, to see if this works for them. Shout up if you think I'm heading in the wrong direction.
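For completeness, a sketch of how the same (still hypothetical) pendingEventsDone() might be used from a page before rendering offline content; the data and UI helpers are the same made-up ones as above.

// page.js: wait for any deferred background fetch events to be handled by the
// service worker before reading the data they write.
async function renderOfflineLibrary() {
  const swReg = await navigator.serviceWorker.ready;
  await swReg.backgroundFetch.pendingEventsDone(); // proposed above, not a shipped API
  const data = await getOfflineMediaDataFromIDB(); // made-up storage helper
  displayOfflineMedia(data); // made-up UI helper
}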
This seems fine to me.
The specification defines several events that go directly to the service worker, for instance when a fetch fails, is aborted, or finishes, or when the fetch UI is clicked.
This is problematic, as it might trigger execution of arbitrary JavaScript in the background without the user necessarily being aware of any of it.
In general, background fetch should not require extending the lifetime of service workers, or starting service workers that have no other client running.
The specification could allow/enforce a model where background fetch would never have to launch service workers by itself. This might require delaying the firing of events until the service worker is running. Some events, like the update ones, could be skipped.
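One way to picture such a model is a sketch like the following, where completed fetches are reconciled on the next page load instead of waking the service worker when they settle. getIds(), get() and the result field are part of the current Background Fetch spec; the reconciliation helper is made up.

// page.js: run on the next load of the site, rather than waking the
// service worker at the moment the background fetch finishes.
async function reconcileBackgroundFetches() {
  const swReg = await navigator.serviceWorker.ready;
  for (const id of await swReg.backgroundFetch.getIds()) {
    const bgFetch = await swReg.backgroundFetch.get(id);
    if (bgFetch && bgFetch.result === 'success') {
      await processCompletedFetch(bgFetch); // made-up app logic
    }
  }
}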
A typical flow where the service worker is not running could be something like: