
ShadowDOMPolyfill is way, way too intrusive! #346

Closed
d4tocchini opened this issue Nov 14, 2013 · 6 comments


@d4tocchini

As listed in the platform/shadow-dom docs as a known issue:

No live NodeLists. All node lists are snapshotted upon read.

One must assume this is just for query methods within a ShadowDOM, not the entire document! But, alas, this is not so... The ShadowDOMPolyfill wraps way too much, including the document object, even if no ShadowDOMs exist!? Thus, there are no live NodeLists anywhere: the ShadowDOMPolyfill monkeypatches methods like document.getElementsByTagName, replacing them with the sluggish and static querySelectorAll.

To compound matters, ShadowDOMPolyfill.unwrap(document) throws Error: Assertion failed. How does one find the real document.getElementsByTagName?
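
Concretely, here is roughly what one observes with the polyfill loaded (a sketch; ShadowDOMPolyfill is the global the polyfill exports):

var divs = document.getElementsByTagName('div'); // a static snapshot
var count = divs.length;

document.body.appendChild(document.createElement('div'));

divs.length === count; // true: the snapshot did not grow, whereas a
                       // native live HTMLCollection would now be longer

ShadowDOMPolyfill.unwrap(document); // throws Error: Assertion failed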

Completely eliminating live NodeLists is a perf deal-breaker for me, and I'm sure for anyone else writing perf-minded DOM libs that ought to work with Polymer.

The ShadowDOMPolyfill should not require developers to jump through unwrap or .impl hoops when using DOM APIs outside of a shadowRoot. It is against the nature of what makes us love polyfills and the awesomeness of Polymer.

For the sake of my sanity, I assume this is well understood to be a temporary fix to what must be a daunting polyfill.

@d4tocchini
Author

@bergie, here be the dragons

@d4tocchini
Author

side effects may include:

document.body != document.getElementsByTagName('body')[0]

@sjmiles
Contributor

sjmiles commented Nov 14, 2013

Unfortunately, this is a bit like saying "I really like this 'car' thing, but it's got all this engine stuff inside!"

I completely agree that the polyfill is maximally intrusive, but this is a requirement of the problem it is solving. The fact that it works as well as it does in most cases is because of the extreme skill of the author.

FYI, because the browser will not let us override document or document.body,

document.body != document.getElementsByTagName('body')[0]

but

wrap(document.body) === document.getElementsByTagName('body')[0]

To compound matters, ShadowDOMPolyfill.unwrap(document) throws Error: Assertion failed.

You cannot unwrap document because it's not wrapped, as above.

How does one find the real document.getElementsByTagName?

One does not find it, because it will not work properly when employing ShadowDOM.
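
To spell out the wrap relationship above (a sketch; I am reaching wrap through the ShadowDOMPolyfill global, one place the polyfill exposes it):

var wrap = ShadowDOMPolyfill.wrap;

document.body === document.getElementsByTagName('body')[0];       // false
wrap(document.body) === document.getElementsByTagName('body')[0]; // true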


@d4tocchini
Author

Is this intrusiveness on the roadmap to be rectified?

You can't possibly be telling me that an accepted requirement of a polyfill is rampantly breaking well-standardized functionality outside the scope of the polyfilled feature, and even when the polyfilled feature is not in use... That irreversibly replacing native methods with static ones that are orders of magnitude slower is an accepted requirement?

I can imagine this is a supremely difficult polyfill, but it's exactly the fact that Polymer is based on polyfilling standards-based power features in browsers that makes Polymer so much more attractive than the other frameworks.

@sjmiles
Contributor

sjmiles commented Nov 14, 2013

The intrusiveness is inherent to the solution; it can't be rectified without a new means of simulating Shadow DOM. We certainly welcome alternate architectures if such are offered, but be advised that it's an extremely hard problem.

Various particular performance concerns, patterns, or anti-patterns have been optimized in this library over time, and general optimizations have also been performed at regular intervals. We'll keep improving it over its life span.

Thank you for pointing out that implementing 'getElementsByTagName' in terms of 'querySelectorAll' is a particularly bad idea. I believe this is an actionable problem (https://github.com/Polymer/ShadowDOM/issues/312).
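
The snapshot-based approach amounts to something like this simplified sketch (illustrative only, not the actual polyfill source):

function getElementsByTagName(root, tagName) {
  // querySelectorAll re-walks the entire subtree on every call and
  // returns a static NodeList, so each call pays the full traversal
  // cost that a native live HTMLCollection amortizes via invalidation.
  return root.querySelectorAll(tagName);
}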

Ultimately, for high-performance applications, it will be hard to beat native Shadow DOM implementations. Hopefully user adoption will help motivate browser vendors to implement this open standard.


@arv
Contributor

arv commented Nov 14, 2013

Let me first say that I agree with @d4tocchini. The issue is that we have constraints that force us into these kinds of incorrect implementations and leaky abstractions. There are a lot of limitations that have forced our hand. I'll outline a few.

  • Live NodeLists. Live node lists require ES6 Proxy to be implemented efficiently. Without Proxies we would need to update all NodeLists on potentially every DOM mutation. For example, if someone has a class-name-based live node list (document.getElementsByClassName) and we change the class attribute of an element, we would have to update all of these lists at that point. In a browser, live node lists make all DOM mutations slower, but only to the extent that a mutation invalidates them, which is a lot cheaper than recalculating all node lists. To make matters worse, ES does not have a WeakRef (we are trying to get this into ES-post-6), so there is no way to know when these lists are no longer needed. (See the sketch after this list.)
  • document.body !== document.querySelector('body'). In an earlier prototype of the shadow dom polyfill we kept the object identity. However, this only worked in IE9 and Firefox. Only these two browsers correctly implement WebIDL attributes as accessors. This earlier prototype overrode the accessors but kept the instances the same. A much simpler and better solution, but Safari and Chrome were deemed too important.
  • document.body !== document.querySelector('body') again. Safari incorrectly marks a lot of WebIDL attributes as non-configurable properties, so we cannot ensure we return a wrapper where needed.
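
To illustrate the first point: a hypothetical sketch of how a live node list could be faked with an ES6 Proxy, with cheap invalidation on mutation and lazy recomputation on read (makeLiveClassList and its internals are illustrative names, not polyfill API; this requires Proxy and MutationObserver support):

function makeLiveClassList(root, className) {
  var snapshot = null; // dropped on mutation, recomputed on next read

  // invalidation is cheap: we only null out the cache here
  new MutationObserver(function () { snapshot = null; })
      .observe(root, { subtree: true, childList: true,
                       attributes: true, attributeFilter: ['class'] });

  return new Proxy({}, {
    get: function (target, prop) {
      if (snapshot === null)
        snapshot = Array.prototype.slice.call(
            root.querySelectorAll('.' + className));
      return prop === 'length' ? snapshot.length : snapshot[prop];
    }
  });
}

Note that the observer also pins the list in memory forever; without a WeakRef there is no way to drop it once the list becomes unreachable, which is the WeakRef problem described above.

For the identity point, the abandoned prototype depended on WebIDL attributes being configurable accessors, roughly like this (a sketch, assuming body is exposed as a configurable accessor on Document.prototype):

var desc = Object.getOwnPropertyDescriptor(Document.prototype, 'body');
if (desc && desc.configurable && desc.get) {
  // only possible where the attribute is an accessor, as WebIDL
  // requires; at the time only IE9 and Firefox exposed it this way
  Object.defineProperty(Document.prototype, 'body', {
    configurable: true,
    get: function () {
      // return a shadow-DOM-aware view while keeping object identity
      return desc.get.call(this);
    }
  });
}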

The approach we are taking is to implement what is needed to get things working. Yes, sometimes this requires user code to be changed, where the web platform is not expressive enough for the polyfill to hide itself.

The good thing is that ShadowDOM is being standardized and implemented in browsers as we speak. If we are successful, all the hard code we have created for these polyfills can be deleted.
