Lazy Custom Element Definitions #782
See #444, which this is pretty much a duplicate of. |
I don't think this is useful for very large apps with 1000+ components, and for small and midsize apps, I don't think this is really needed. The fact that we have all kinds of bundlers these days, and that they continue improving their capabilities, seems to go against this proposal. You can just have logic to load the pieces that you need based on user interaction (at the app level), rather than using production of the DOM as the trigger to load the new components. Seems backward. |
@caridy The ability to separate usage of a component from loading of a component is one of the key benefits of HTML and custom elements. I can just write the tag in HTML without coordinating when its definition loads. I like this proposal as it provides a hook into usage. As the OP explains, creating a stub can lead to a lot of code. |
don't get me wrong @matthewp, I'm sure that if this feature exists many folks will use it, but the question is whether or not such a high-level API should be provided by the UA. IMO, it should not; instead, I think low-level primitives, like what is being discussed in whatwg/dom#533, are far more important because they allow app code, frameworks, and libraries to implement this kind of feature themselves. |
@caridy The fact that we have a lot of bundlers with improving capabilities that interoperably do code-splitting based on dynamic import is one of the motivators for this proposal. That doesn't detract from this at all.
Please take a look at React.lazy() and Stencil.js for different approaches here. Stencil implements lazy loading, but only for Stencil components. React.lazy() just works for any React component. That's the ease-of-use I'd like to add here. |
@justinfagnani do you agree that having a low-level API that allows us to do that is good enough? Consider that it opens the door for others to rely on such an API to do their own thing that doesn't necessarily require the usage of a custom element; not everyone uses a custom element for everything in the page. |
I u-turned on this when I suggested it a few years ago, mainly because I realised that you can't really be sure that an element definition is loaded when you want to do anything with it programmatically. When I suggested it, we had an approach where we would define a base class for this. I'm an advocate of designing custom elements with the principle of least astonishment in mind, meaning that I try to make them behave how I think a native element provided by the browser would behave, mainly by linking attributes to properties, but also in other ways. Consider the following:
<my-element id="myEl" foo="bar"></my-element>
<!-- load the script that sets up element definitions -->
<script src="custom-elements.js"></script>
<script>
console.log(myEl.foo);
</script>
I expect the foo property to reflect the foo attribute here, but with a lazy definition the element may not have been upgraded yet by the time that script runs. That being said, I do think that… |
What about a way to unload element definitions? Seems like we should do something like this symmetrically: use memory when needed, but also free memory when no longer needed. |
It might be useful to have granularity on what triggers these registrations too; for example, you might have a template that uses a component in a way where its content has been pre-rendered anyway, in which case you don't really need to load the JavaScript until an attribute that would cause an update changes (of course you can always load it sooner if needed). As such, it might be worth just accepting any options from the MutationObserver init in the function:
customElements.defineLazy('my-element', () => import("./MyElement.js"), {
attributes: true,
attributeFilter: [/* attributes I'm actually gonna change */],
}) |
In practice, it is convenient to adopt a naming convention that maps each custom tag name to a URL. Then one rule covers many custom tags and can be expressed in a single JS module loader:
// For many internal components
customElements.defineLazy('*', tagName => import(
`/modules/${tagName}.js`
));
// For many external components
customElements.defineLazy('vendor-*', tagName => import(
`https://vendor.com/modules/${tagName}.js`
)); |
The trigger should also apply to programmatic creation:
// this code should call `customElements.defineLazy`
const myComponent = document.createElement('foo-bar')
After the loader's promise has resolved, the element would be upgraded. |
@Jamesernator I think finer-grained loading should be left to the element implementation itself. One goal of this proposal is that in combination with a Scoped CustomElementRegistry, the use-site of an element can lazily define it in a generic way without knowing anything about its implementation (like what attributes it cares about). If an element itself wants to lazy-load parts of its implementation based on lifecycle, attribute or children changes, or events, then it can do that internally. |
@justinfagnani I don't entirely agree. Assuming declarative (and hence pre-rendered) shadow DOM happens, it would be useful for a pre-renderer to have an API it can target that can forgo even downloading the component unless it actually changes in a way the component would want to respond to. Obviously a pre-renderer could just dump its own loading logic into the page, but having a primitive like that as part of the platform would be great because it means you only pay download costs when you actually need to modify the element, rather than unconditionally loading parts of every single component and repeating the same logic in potentially many different files. |
One observation made during the F2F is that Firefox has an internal API that matches the alternative, a callback when a new element name is seen. Another observation was that this should likely coexist with scoped registries. |
Currently implementing something very similar to this. A simple primitive that might make this a whole lot easier would be adding the ability to lazily define observed attributes:
//what i want
const LazyEl = (resolveFn) => class LazyElement extends HTMLElement {
constructor(){
super();
resolveFn().then(impl => {
impl.default.attrs.forEach((attr) => this.constructor.observeAttribute(attr)); //this doesn't exist :(
})
}
}
customElements.define('my-lazy-element', LazyEl(() => import('./my-comp-impl')));
//what i have to do
const LazyEl = (resolveFn, attrs) => class LazyElement extends HTMLElement {
static get observedAttributes(){ return attrs }
constructor(){
super();
resolveFn().then(impl => { ... })
}
}
customElements.define('my-lazy-element', LazyEl(() => import('./my-comp-impl'), ['foo', 'bar'])); |
@robwormald one other way to implement lazy definitions without having to know the observed attributes up-front is to use a mutation observer. Something like this:
// caution: completely, absolutely, untested and never-run code
const lazyDefinitions = new Map();
const lazyDefinitionObserver = new MutationObserver((records) => {
for (const record of records) {
for (const node of record.addedNodes) {
const walker = document.createTreeWalker(node, NodeFilter.SHOW_ELEMENT);
while (walker.nextNode() !== null) {
const tagName = walker.currentNode.localName; // localName is lowercase, matching the registered names
const lazyDefinition = lazyDefinitions.get(tagName);
if (lazyDefinition !== undefined) {
lazyDefinitions.delete(tagName);
(async () => {
customElements.define(tagName, await lazyDefinition());
})();
}
}
}
}
});
lazyDefinitionObserver.observe(document, {childList: true, subtree: true});
const originalAttachShadow = HTMLElement.prototype.attachShadow;
HTMLElement.prototype.attachShadow = function(options) {
const shadow = originalAttachShadow.call(this, options);
lazyDefinitionObserver.observe(shadow, {childList: true, subtree: true});
return shadow;
};
customElements.polyfillLazyDefine = (tagName, loader) => {
lazyDefinitions.set(tagName, loader);
};
cc @bicknellr who's been looking into polyfilling this |
The version that @sorvell and I have been working on requires that the definition is returned synchronously - so, somewhat different - but I think it's helped expose some corner cases that would also be relevant to an async version. The tricky bits I ran into seem to be mostly around situations other than "the browser came across some name and wants to call its constructor" that arguably need to trigger the lazy definition getter. (But I think we didn't run into more complicated timing issues only because we decided not to support async initially.) Particularly, CustomElementRegistry's other methods raise questions: for example, what does calling them do when a name only has a lazy definition? Another few corner cases stem from a weird behavior of customElements.define:
try {
customElements.define('custom-element', class extends HTMLElement {
get connectedCallback() { throw new Error('Oh no!'); }
});
} catch (e) {
console.error('Caught an error:', e);
}
customElements.define('custom-element', class extends HTMLElement {
constructor() {
super();
console.log('upgraded');
}
});
document.createElement('custom-element'); // Logs "upgraded".
If a lazy definition for a particular name already exists and a user tries to call define() with that name, what should happen? There's also the question of what the registry should report about such a name in the meantime. On the feature in general, I feel like it's going to be difficult to use if you have to watch for individual events from every element in some tree to know that the tree is completely ready and you can show it to the user without it being broken. Maybe there needs to be some function to get a promise that resolves when all the descendants of a particular node have had their definitions fetched and run:
const someTree = template.content.cloneNode(true);
customElements.upgrade(someTree);
await customElements.waitForAllTheLazyDefinitionsPlease(someTree);
otherPlace.appendChild(someTree); |
It looks like Firefox has implemented lazy element definitions as an internal-only API, for performance reasons: https://bugzilla.mozilla.org/show_bug.cgi?id=1460815 I think this is a pretty good indicator of the need for and utility of this feature. |
Yup, see also #782 (comment). |
I'm not for or against this, but I wanted to describe a potential pitfall: this feature can cause a loading pyramid. Component A renders elements B and C into the DOM. At this point, components B and C begin to load. Component B uses components D and E, and component C uses F and G. Once B loads, then the browser can start loading D and E. Once C has loaded, then the browser can load F and G. In contrast, with well-planned route-based loading, when the user lands on a page that uses the top-level A component, the page can go and load A, B, C, D, E, F, and G all in one request at the same time (f.e. inside the same bundle), and thus the experience will load more quickly. This might be something to consider as a user of a new lazy feature, especially because using native ES modules would make it easy to do this, and when composing many components together the DOM tree can get many levels deep, causing many HTTP calls instead of one for a planned route. |
@trusktr you are absolutely right, this is a bit of a hazard. This is also a hazard with dynamic import in general. The key here though is to allow dependents to specify that they can deal with async loading of a definition, which then allows flexibility for tools like bundlers. An app may still combine multiple lazy loaded components in the same bundle... I've seen proposed systems for doing this dynamically / statistically based on usage or analyzing the content to be served. |
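For illustration, the bundler-friendly arrangement described above might look like this (a sketch using the defineLazy name from this proposal; the bundle path and export names are made up):
const loadCards = () => import('./rarely-used-cards.js'); // one code-split bundle for several components
customElements.defineLazy('quiz-card', async () => (await loadCards()).QuizCard);
customElements.defineLazy('survey-card', async () => (await loadCards()).SurveyCard);
customElements.defineLazy('game-card', async () => (await loadCards()).GameCard);
// Whichever tag is encountered first triggers a single fetch; the other loaders reuse the cached module.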
A DCE (declarative custom element) with a source URL would be capable of lazy loading:
<custom-element src="some/url" tag="abc"></custom-element>
<abc></abc>
The browser would load the DCE definition and materialize the class at its own discretion, taking the hydration rules into account. |
What if instead of the lazy callback needing to return a custom element, the callback received an object with an API it could use to optionally define the element later? The platform could invoke the callback when it first sees the element, but the lazy callback could choose when it wants to actually define it. If it wants to use MutationObserver semantics, it would just return the custom element as shown in the original proposal. This provides nice ergonomics for the simple case. If it wants to use IntersectionObserver semantics, it would create an observer and then use it to call define at a later point in time. It could use any mechanism to determine when to define, such as based on attributes, mouse proximity, or even perhaps defer loading using the microtask queue so that it could batch requests for multiple element definitions that were seen by the browser in the same tick. The API could look something like this:
interface IDefineLazyElement {
readonly element: HTMLUnknownElement;
define(type, options): void;
}
lazyDefine(tagName: string, callback: (d: IDefineLazyElement) => Promise<void | Constructor>): void;
Additionally, we could explore… |
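For instance, deferring the definition until the element is visible might look like this under the shape proposed above (lazyDefine, the callback argument d, and the module path are all hypothetical):
lazyDefine('heavy-chart', async (d) => {
  // Wait until the placeholder element actually scrolls into view before loading.
  const io = new IntersectionObserver(async (entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      io.disconnect();
      const { HeavyChart } = await import('./heavy-chart.js');
      d.define(HeavyChart);
    }
  });
  io.observe(d.element);
});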
This is already possible with promises. You can wait for arbitrary things before returning a class. Anything you can do with such an object you can do by returning a Promise. I personally like coalescing on fewer async primitives like promises. An object that can be used to define an element is not so different from a callback, and I strongly prefer Promise-based APIs over callback-based ones because of better composition and integration with the language. I think promises give a nice API for the simplest case of a default export:
customElements.lazyDefine('my-element', () => import('my-element'));
Pretty nice API for named exports:
customElements.lazyDefine('my-element', async () => (await import('my-bundle')).MyElement);
And you can do arbitrary async computation if you need:
customElements.lazyDefine('my-element', async () => {
const module = await import('my-module');
const myClass = await computeMyClassSomewhow(module);
return myClass;
}); |
Good points! I think that addresses most of what I was proposing. But I think there are some other scenarios. In particular, it seems like we might at least want the callback to receive the element instance. |
Another point is that the |
customElements.lazyDefine('my-element', async tagName => {
const { MyElement } = await import(tagName);
customElements.define(tagName, MyElement, ... );
}); |
@uasan Seems like that solution could create some challenges around determining whether a double define of the same element is happening. Does it also open up the possibility of race conditions? |
To reiterate in fewer words: using promises and async is great, as is avoiding callbacks, if we can avoid adding complexity. But I guess, besides the name and handling registration at the point where the promise resolves, all the rest can be done with the Context API and DOM events that the importing code can set up, listen to, and handle. |
@uasan raises a good point. The lowest level primitive here is a callback that runs the first time a potentially custom element is seen. That callback can then call customElements.define() itself. In the spirit of using promises and being consistent with existing API like customElements.whenDefined(), that could look like:
// lazy define my-element:
(async () => {
await customElements.whenCreated('my-element');
customElements.define('my-element', (await import('./my-element.js')).MyElement);
})();
I still think that a lazyDefine() helper is nicer sugar for the common case, and it can be built on top of that primitive:
const lazyDefine = async (tagName: string, loader: () => Promise<typeof HTMLElement>) => {
await customElements.whenCreated(tagName);
customElements.define(tagName, await loader());
};
lazyDefine('my-element', async () => (await import('./my-element.js')).MyElement);
There are potential race conditions in any API here, because an element could be defined by a different code path in between the callback being invoked and it either calling define() or returning a class. |
The loader can fail (a network error, for example), so a retry mechanism might be worth considering:
customElements.lazyDefine('my-element', async tagName => {
const { MyElement } = await import(tagName);
customElements.define(tagName, MyElement, ... );
}, { retry: { delay: 5000 } } ); |
@justinfagnani I like the idea of the whenCreated() API. Additionally, this enables application-specific control of retries and similar behaviors, which @uasan brought up.
const ele = await customElements.whenCreated(tagName);
defineBasedOnUserBehavior(
ele,
ele.getAttribute("data-load-behavior"),
ele.getAttribute("data-bundle-src")
); |
I see. I like the whenCreated() idea. To confirm: with this, if there's a problem getting the implementation, the retry mechanism is left to the application, so I agree with @uasan and @EisenbergEffect. Maybe exponential backoff would be useful there too; when a UA goes offline, we might not want to eat up resources retrying. For CSS, we might want to avoid duplicating it, so how would we inject it inside? In my experiments, I would have a project with its own CSS, and my new component needs that CSS, so I'd have to load it both from the component and from the document host realm. The component we're lazily loading might not have knowledge of where that CSS is. Probably solvable with the Context API. |
I think this API should only be called/resolved once, like whenDefined(). |
In that case I would have to use a MutationObserver on every single Shadow DOM in my application. I don't think that's a great solution. Per @renoirb's point: why not just have an event that fires whenever an unknown element appears? Then I can do whatever I want with that information. An event like that would be useful for non-web-component scenarios as well. I can see various front-end frameworks being able to make use of that. |
Autoloaders in PHP are built on such an event (emitted on access to undefined classes); it's a really simple, understandable, and powerful pattern. |
I'm not sure this API should force its users into delaying their definitions until the next microtask checkpoint or changing the construction / upgrade order of trees depending on whether or not they contain undefined custom elements, if those behaviors aren't required. If the callback is allowed to return a definition synchronously, the caller keeps control over that. This could be useful if you have multiple large trees that you expect to insert into the page as a result of different user actions and you want to preload definitions used in those trees after the initial page is idle, but defining them all immediately after the preload completes would be too expensive. If this API doesn't allow you to provide a definition synchronously, then your large trees that have definitions that could have been supplied immediately will all have to contend with being forced to upgrade in definition order. If the API does allow you to provide a definition synchronously, then you can structure things such that your tree is still upgraded in tree order, if that's important to you. Tree order might be important if your heavy tree contains components that expect to signal each other using events once they're connected. In the example below, maybe the nested file-row and collapsible-directory elements coordinate that way when they connect:
const definitionCache = new Map();
customElements.lazyDefine('collapsible-directory', () => definitionCache.get('collapsible-directory'));
customElements.lazyDefine('file-row', () => definitionCache.get('file-row'));
const readyToShowHeavyDialog = import('./definitionsUsedInHeavyDialog.js')
.then(({definitions /*: Map<string, CustomElementConstructor> */}) => {
definitions.forEach((v, k) => definitionCache.set(k, v)); // Map#forEach passes (value, key)
});
someButton.addEventListener("click", async () => {
await readyToShowHeavyDialog;
const dialog = document.createElement("dialog");
// This will upgrade in tree order because all of the definitions
// were supplied synchronously:
dialog.innerHTML = `
<collapsible-directory>
<span slot="name">outerDir</span>
<file-row>file1</file-row>
<file-row>file2</file-row>
<collapsible-directory>
<span slot="name">innerDir</span>
<file-row>file1</file-row>
<file-row>file2</file-row>
<file-row>file3</file-row>
</collapsible-directory>
<file-row>file3</file-row>
</collapsible-directory>
`;
document.body.appendChild(dialog);
dialog.showModal();
Also, I know that upgrade candidate lists were removed from the initial version of the custom elements spec, but IIRC that was mostly because the lifetime of those candidate lists was unbounded. With an API like this, where you could supply a definition synchronously, you could use candidate lists to avoid full-document tree walks because they would only need to exist for the time it takes the callback to return. The callback is called only because no element with the given tag name has been seen in an upgradable context before, so that means that if the callback returned a definition synchronously, then the only other elements with that tag name that would need to be upgraded would be those that would have attempted to upgrade during that callback, letting you avoid the full-document walk at the end. (The browser might have to sort them though.) If the callback returns asynchronously, though, that shortcut doesn't apply.
|
Re, Events: CustomElementRegistry is not currently an EventTarget, though it could be. But I think events are less ergonomic because you can't specify what tag names you care about, so you have to filter in the event handlers. I would still argue to keep the API consistent with the existing APIs. |
I would be interested in seeing if we can make the registry an EventTarget. Let it dispatch events in both scenarios: when an unknown element is encountered and when an element is defined. The promise-based APIs could easily be defined as sugar over the event with a filter on tag name. Also, very easy to provide library-specific convenience over top of these APIs. The main thing about this approach is that it seems to enable all the use cases. |
I could get behind an event if we had a cheap way to query the registry for unregistered elements already on the page. Without that, an event would require a discovery method and DOM traversal to find elements added before the listener. I like the ergonomics of @justinfagnani’s initial post. Pre-registering and offloading the import mechanism to the user would satisfy all the use cases I currently have for this feature. Related: I recently built an auto loader for Shoelace using a mutation observer. It has a similar problem in that it requires an initial discovery method that runs since I can’t guarantee the auto loader initializes before custom elements are in the DOM. It uses :not(:defined) for that discovery pass. A promise-based syntax makes auto loading trivial, as long as the tag names are known ahead of time. That said, there still could be a gap for unknown elements that events would solve. But again, we’d need a low-cost way to query for unregistered elements in a document for this to be a complete solution. |
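For context, a discovery pass plus observer along those lines might look roughly like this (a sketch, not Shoelace's actual code; the /components/ URL convention is made up):
const maybeLoad = (el) => {
  const tag = el.localName;
  if (tag.includes('-') && customElements.get(tag) === undefined) {
    import(`/components/${tag}.js`).catch(() => { /* no module for this tag */ });
  }
};
const discover = (root) => {
  if (root instanceof Element && !root.matches(':defined')) maybeLoad(root);
  for (const el of root.querySelectorAll(':not(:defined)')) maybeLoad(el);
};
new MutationObserver((records) => {
  for (const record of records) {
    for (const node of record.addedNodes) {
      if (node.nodeType === Node.ELEMENT_NODE) discover(node);
    }
  }
}).observe(document.documentElement, { childList: true, subtree: true });
discover(document); // initial pass for elements already in the DOM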
WRT extending the Custom Element APIs, would overloading customElements.define() to accept a Promise work, together with an event for unknown elements? Something like:
document.addEventListener('unknown-element', (e) => {
if (e.target.localName === 'my-element-of-interest') {
customElements.define('my-element-of-interest', new Promise((resolve) => {
if (document.readyState === 'complete') return resolve(import('./my-element.js'))
window.addEventListener('load', () => resolve(import('./my-element.js')))
}))
}
}) |
@keithamus That works for me. Also solves the issue of deeply nested elements and custom registries. |
For me one of the larger motivations for something like |
That sounds a lot like mutation events, which afaik are untenable.
What problem are you referring to here? |
I think we need to back up and start with some scenarios from the real world. We're each jumping to APIs without adequately explaining the scenarios that are driving the need for a new capability. What I'm trying to say is that the scenarios I have aren't solved by the simple lazy load API proposed here. However, they are solved by the alternate proposal of a more generic callback for unknown elements. I am happy to contribute some scenarios from the real world. @justinfagnani Do you want me to drop them here or do you want to collect them in an alternate location? |
Yeah, can you list the scenarios? I don't see how deeply nested elements would affect any API shape we've discussed, though. Custom element registries are already invoked for any potentially custom element. This is how non-lazy definitions work. So in any case where an element would have potentially upgraded, the registry could see a lazy definition and invoke whatever callback/event/Promise the API uses. |
There may not be a problem with custom registries. But it feels to me like the broader set of scenarios that need the more general hook for unknown elements is not being understood well. I'll add some scenarios here when I get a block of time to write it up. Maybe tomorrow or early next week depending on how things go. |
Here's a scenario from a real-world, well-known site that is heavily invested in Web Components, where the architecture could really benefit from some sort of lazy loading platform feature as is being discussed here... Let's say you are building a feeds experience. Each feed is customized to a specific user. The feeds themselves are an infinite scrolling list of cards. Each card is a Web Component. The experiences that these card components afford are highly diverse and there are hundreds of different cards with the number growing all the time. There is a central engineering team that creates the shell and many of the cards, but a large portion of them are created by other engineering teams and partners, which have a business case for plugging into the overall feeds experience. When any given user visits the site, the frontend queries a service to find the specific set of content for that user at that point in time. This content can contain any combination of the hundreds of cards available from the library, but the specific combination cannot be determined ahead of time, as it is based on preferences, current events, time of day, external advertising services, etc. As the user scrolls, additional data is loaded which the client then dynamically renders. At any point in time, the core engineering team or partners may launch new card experiences and the backend services may begin sending back data for cards that did not exist at the time that the original server request was made by the client. I won't get into the details of how this is/should be handled today with the current standards, but it's very non-trivial and less than ideal. However, it seems that one way the existing system could be simplified and improved is if the frontend could simply detect an HTML element that showed up which was undefined. It could then send a request to the server to get the batch of definitions (all unknowns seen within a current tick). Please note that upfront defining all lazily-loadable element names is not feasible in this context. Again, there are hundreds of experiences, which could amount to thousands of elements, and those can change while the user is interacting with the site at any point in time. The use cases are even more interesting though, as the client may not want to simply load every element it doesn't have a definition for. Certain elements it may want to delay loading until certain things happen. A typical example is the settings UI for the feeds experience. Most of the time that a customer visits the site, they don't mess with their settings. So, it's wasteful to load it. Instead, you want to load that set of components when some app-specific heuristics determine that the user is about to interact with the settings. Similar rules may also be applied to various cards within the library, which have more or less complex interactions. For example, a quiz/survey card may not want to load all its behavior unless a user signals that they are about to engage with the quiz/survey. A card that contains a casual game may not want to load until the user signals they want to play. The rules for what to load and when are application and experience-specific and are constantly being changed based on analysis of RUM data. In fact, these rules themselves may be user-specific. |
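A sketch of the batching idea in that scenario (the unknown-element hook is hypothetical, and fetchDefinitions stands in for an application service that returns tag-name/constructor pairs):
const pendingTags = new Set();
let flushScheduled = false;
function onUnknownElement(tagName) {
  pendingTags.add(tagName);
  if (flushScheduled) return;
  flushScheduled = true;
  queueMicrotask(async () => {
    flushScheduled = false;
    const batch = [...pendingTags];
    pendingTags.clear();
    const definitions = await fetchDefinitions(batch); // one request for all unknowns seen this tick
    for (const [tag, ctor] of definitions) customElements.define(tag, ctor);
  });
}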
As mentioned in #716 and discussed at the 2018 Tokyo f2f, lazy definitions would be a useful addition to the specs.
What
A lazy definition would take a tag name and an async function that returns the class to be defined lazily. When the tag is first encountered the browser will invoke the function:
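A minimal sketch of what that could look like, using the defineLazy signature from the Details section below (module path and export name are illustrative):
customElements.defineLazy('my-element', async () => (await import('./my-element.js')).MyElement);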
For ergonomics, we may want to support a default export as the element class:
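For example (again a sketch; this assumes the module's default export is the element class):
customElements.defineLazy('my-element', () => import('./my-element.js'));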
Why
Dynamic imports are an important way to load code at the point in time when it's actually needed, and not cause unneeded cost and latency by loading earlier than that. Modern build tools and bundlers are able to bundle and preserve the dynamic import boundaries as code-split points. Right now this works great for situations where application code can explicitly know that it needs the new code, so we typically see it used when there's a user action or around navigation with route-based loading and therefore route-based code-splitting.
Maybe we'd see code like this to navigate:
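Something along these lines (a sketch; router.on and renderProductPage are made-up application code):
router.on('/products/:id', async ({ id }) => {
  // The route handler pulls in the product-page bundle only when the route is hit.
  await import('./product-page.js');
  renderProductPage(id);
});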
This is fine as long as the product-page bundle isn't itself too large, and/or most of the components in the bundle are used on every product page. But in some cases a page/route may be far too coarse-grained to get effective splitting. I.e., any one product page may use only a fraction of the potential features of a product page.
The recent trend is to take advantage of tooling support for dynamic import() to do component-based code-splitting.
React recently added a lazy() component wrapper: when OtherComponent is used, its implementation will be fetched.
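For reference, React's API looks roughly like this (per the React docs; the component path is illustrative):
const OtherComponent = React.lazy(() => import('./OtherComponent'));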
In the Web Components ecosystem, Stencil builds lazy-loaded components by default. They generate a stub that is registered and loads the implementation on first creation.
On the Polymer team we have experimented with this as well with something called "SplitElement": https://github.com/PolymerLabs/split-element
These approaches work by registering a stub class with lifecycle callbacks and observedAttributes known at definition time. The problem with the stub approach is that each element must be written to be lazy loaded; you can't just lazy load an arbitrary component. Then, the implementation is non-trivial, as the stub has to perform something akin to upgrading between itself and the implementation class. If you want to support constructors in the implementation class, you also have to do the "constructor call trick".
It would be much nicer and more general if the platform supported this directly.
Details
defineLazy:
CustomElementRegistry#defineLazy(tagName: string, loader: () => Promise<CustomElementDefinition>)
Loading
When a lazy-defined element is created, its associated loader function is called.
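A sketch of the intended sequence (assuming the defineLazy signature above; run inside an async function):
customElements.defineLazy('my-element', () => import('./my-element.js')); // nothing is fetched yet
document.body.innerHTML = '<my-element></my-element>'; // first instance triggers the loader
await customElements.whenDefined('my-element'); // resolves once the module loads and the definition is registered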
element-definition-loading Event
The elements created before the definition is loaded also fire an element-definition-loading event. This allows generic code above the lazy elements in the tree to display user affordances, like a spinner, while code is loading. The event should carry a Promise that resolves when the element is upgraded. This Promise can be the same Promise returned by whenDefined(tagName).
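A sketch of how a container might react to it (the event name comes from this proposal; carrying the Promise on event.detail is an assumption, and showSpinnerOver is a made-up app helper):
document.body.addEventListener('element-definition-loading', async (event) => {
  const spinner = showSpinnerOver(event.target); // display a loading affordance near the stub element
  await event.detail.promise; // same Promise as customElements.whenDefined(event.target.localName)
  spinner.remove();
});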
Alternate Solutions
Another approach is to allow for a generic callback for potentially custom, but undefined, elements. This callback could then load and register the element definition. It's more general, but probably less ergonomic.
Polyfilling
This feature can be polyfilled by using a MutationObserver to watch for lazy element instances. In order to work in ShadowRoots, the polyfill will have to patch attachShadow.