If possible you should try to avoid using childNodes and attributes. Both are computed properties that are very expensive to access.
For example, rather than using childNodes, use firstChild and loop through siblings via nextSibling, as DOM nodes are based on a two-way linked list – not a graph/array structure.
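A minimal sketch of what that traversal looks like (the helper name `eachChild` is just for illustration, not part of any existing API):

```js
// Walk a node's children via the linked list (firstChild/nextSibling)
// instead of touching the live childNodes collection.
function eachChild (node, fn) {
  var child = node.firstChild
  while (child) {
    fn(child)
    child = child.nextSibling
  }
}
```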
Also, I did some tests recently and it's far more effective to create a lightweight "vdom" object by inspecting what exists in a real DOM node and storing it in a WeakMap, with the DOM node itself as the key. You'll never get better performance from reading DOM node properties than from the "vdom" object, as DOM properties aren't monomorphic and can't be properly optimised by JavaScript engines. So rather than getting O(1) from doing dom.nodeType, you're likely to get O(log n) or worse depending on the JavaScript engine's optimisation path.
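A rough sketch of the WeakMap idea, under the assumption that you only need a small snapshot of each node (names like `getVNode` and the exact shape of the cached object are illustrative, not from any particular library):

```js
// Cache a lightweight plain-object snapshot of a DOM node in a WeakMap,
// keyed by the node itself, so repeated reads hit plain JS properties
// instead of DOM getters. Entries are garbage-collected with the node.
var cache = new WeakMap()

function getVNode (node) {
  var vnode = cache.get(node)
  if (vnode) return vnode

  vnode = {
    nodeType: node.nodeType,
    nodeName: node.nodeName,
    // Only elements have attributes; copy them once into a plain array.
    attributes: node.nodeType === 1
      ? Array.prototype.map.call(node.attributes, function (attr) {
          return { name: attr.name, value: attr.value }
        })
      : null
  }
  cache.set(node, vnode)
  return vnode
}
```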
> For example, rather than using childNodes, use firstChild and loop through siblings via nextSibling, as DOM nodes are based on a two-way linked list – not a graph/array structure.
Oh, that's cool! Had no idea 😁
> Also, I did some tests recently and it's far more effective to create a lightweight "vdom" object by inspecting what exists in a real DOM node and storing it in a WeakMap, with the DOM node itself as the key.
Very interesting. What do you mean by "monomorphic" - I'm not familiar with the term in this context.
I like the idea you sketched out with WeakMap, though it's a shame it's not supported widely enough yet. We'd have to pull in a polyfill, and given we seem to optimize for simplicity & file size it's probably not worth it. I was wondering if it'd be possible to do the same with an LRU cache, but given we'd hold on to DOM nodes longer than we need to, the potential for leaking means we could end up worse off - I'd rather not hog too much memory haha.
To be honest I think between isSameNode() caching and requestIdleCallback in polite-element we should already be fast enough for most applications 🚀 - thanks heaps for the feedback!! ✨