Declarative Shadow DOM #494
Thanks for the thorough explainer and links out to previous discussion! I've started working my way through them, but I haven't really seen the full use case spelled out: that is, can you explain or point me to a part of the explainer or previous discussions which explains what the need is for server-side rendering/no-JS solutions for Shadow DOM?

For example, the explainer mentions search engine indexing, but I don't have a good mental model of a situation in which content in shadow DOM would be relevant for indexing. Similarly, for users running with no JS, under what circumstances would providing the content within Shadow DOM (as opposed to slotted) make a critical difference to the experience?

Glancing through the previous discussions, it seems like there is strong developer demand for this, but I gather there is context which isn't included in those discussions, which it seems I've missed.
Thanks for kicking off this review! So your question seems to be: why is Shadow DOM needed in SSR/no-JS content? I.e. there is a section of the explainer that details why people want no-JS, but you're asking why they need Shadow DOM with that?

If so, I think the main reason would be that people using Web Components have custom elements that use Shadow DOM. Those components assume the style encapsulation of Shadow DOM, and have stylesheets built accordingly. If those sites are to be SSR'd, and if there is no way to stream out the composed page including the Shadow DOM, then the SSR library would have to do significant work to essentially re-write the entire page in terms of just light DOM. On the other hand, if a declarative Shadow DOM primitive existed, a simple call to getInnerHTML() would retrieve the entire SSR content that should be streamed to the client. That would be less work on the server, and also on the client, since that composed page could be re-used as-is once the components are hydrated. No need to wipe out the declarative content and re-build it.

The above paragraph is addressed to the SSR and no-JS-user cases, because the goal there is to deliver pixel-correct output via declarative content. The search engine indexing case is a bit different, and as you point out, the "interesting" content is likely slotted in from the light DOM. But in the discussions I've had with developers, they seem concerned that there is a risk of not properly representing the content to crawlers if the entire shadow DOM is missing.

LMK if that answers your questions!
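A minimal sketch of what that round trip might look like, using the `<template shadowroot>` syntax and the `getInnerHTML()` option from the proposal (attribute and option names were still subject to change at the time of this discussion):

```html
<!-- Server-rendered output: the shadow root is declared inline,
     so no JavaScript is needed to attach or populate it. -->
<host-element>
  <template shadowroot="open">
    <style>p { color: red; }</style>
    <p><slot></slot></p>
  </template>
  Slotted light-DOM content
</host-element>

<script>
  // For serialization (e.g. on the SSR server), the proposed
  // getInnerHTML() emits the composed tree, shadow roots included:
  const html = document.body.getInnerHTML({ includeShadowRoots: true });
</script>
```

Because the serialized output is itself valid declarative Shadow DOM, the client can use it as-is and hydrate in place, rather than discarding and rebuilding the shadow trees.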
Thanks for the quick response. It seems like the underlying problems that this API is looking to solve (primarily by allowing server-side rendering to occur) are:
Is that right?
This has been quite desirable before, especially for e-commerce. That was mostly due to the Google and Bing crawlers using very old browser engines (I think Google Search used M42 until a year or so ago), as well as being quite slow at indexing with JavaScript enabled. A site might be indexed immediately without JS, but then it could take a week or so before it got re-indexed with JS, which has been a no-go for a lot of companies. I personally know of companies who have disregarded Web Components due to the lack of SSR support, as it would result in very poor SEO and kill their businesses.

On the other hand, it seems that both Google Search and Bing are using modern engines now that are being updated on a good cadence. I am still not sure whether you can rely on things being indexed quickly with JS enabled. Maybe Martin knows better (@AVGP)
Yes, that is right. I think there are two other compelling use cases for this feature, in addition to those two:
I have also heard this objection/concern from developers several times. Even if they want to use Web Components, they can't, because SSR is seen as a hard requirement.
Thanks so much for the additional detail, and I promise we won't be talking about use cases forever, but I just have a few more clarifications to ask for:
Can you give an example of why this is needed? e.g. if you're using a custom element, doesn't the custom element know how to re-create its own Shadow DOM, meaning the light DOM is sufficient?
This makes me somewhat uneasy. My understanding is that style scoping isn't a "feature" of Shadow DOM that you can choose to use in isolation, but rather part of the encapsulation guarantees that Shadow DOM provides, in order to ensure that components using Shadow DOM can be safely re-used in any context. This encapsulation has consequences for things like:
Do you know to what extent Shadow DOM is being used in this way? Have I misunderstood?
To what extent is this based on a (possibly outdated) perception of how search engines work, vs a requirement for rapid rendering on the client? Sorry for all the meta-discussion, but I'd really like to fully understand the context for this feature before trying to form a solid opinion on the design.
Two reasons, I think:
It's a good question. For right now, there is no other style scoping mechanism provided by the web platform. If you want style scoping, you either roll it yourself, or you use Shadow DOM. I agree that Shadow DOM includes much more than just style encapsulation, and as you mentioned, that has other consequences. But for now, it's all we have. There is a project underway to propose and spec a light-DOM style scoping mechanism, but that is likely a ways off. We haven't even published that proposal yet. In the meantime, Shadow DOM is the style scoping mechanism we have. I would argue that the consequences you list above are ones that we should try to fix anyway. They represent shortcomings of tooling w.r.t. Shadow DOM in general. No?
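As a small illustration of the point above — Shadow DOM being the only platform-provided style scoping mechanism — here is a sketch (not from the original thread; the `my-card` element name is made up):

```html
<!-- A page-level rule that would normally style every <p>. -->
<style>p { color: blue; }</style>

<my-card></my-card>
<script>
  // The rule inside the shadow root applies only within it, and the
  // page-level rule above does not reach into the shadow tree.
  const card = document.querySelector('my-card');
  const root = card.attachShadow({ mode: 'open' });
  root.innerHTML = `
    <style>p { color: red; }</style>
    <p>Scoped: red, regardless of page styles</p>`;
</script>

<p>Unscoped: blue</p>
```

Components built on this guarantee ship stylesheets that assume encapsulation, which is why an SSR story that drops the shadow boundary forces a rewrite of those styles.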
It is possible that part of this requirement is due to outdated perceptions of search engines. However:
Primarily for reason #2 above, I can completely understand the ongoing hard requirement to use SSR. Even knowing what I know, if I were building an external site, I would likely reach the same conclusion.
Thanks for the continued responses! We discussed this a little in the breakout today, and will pick it up again at the "face to face" in a couple of weeks. I think this probably gives us enough context to read through the proposal in depth.
That sounds great, thanks!
@alice and I took a look at this during the TAG F2F this week, and we noticed a few things. Here's the first: I'm surprised and concerned that this enables the serialization of closed shadow trees by code outside the shadow tree. Doesn't that defeat the purpose of having closed shadow trees in the first place? I suppose you might want such a feature for a browser's "save this page" feature, or maybe for server-side code, but perhaps that would be better served by an API that's not available to page authors.
Thanks for the feedback! Responses below.
Great news, thanks.
Yes, I was curious about this as well, since I only heard it anecdotally while developing this feature. I will make it a point of any origin trials to try to gather more data here.
Good suggestion! Done. It looks similar, but I've moved the two sections ("Closed shadow roots" and "Additional arguments for attachShadow") out of the "Other Considerations" section up into the "Serialization" section proper.
Sounds good, thanks.
I agree here - streaming would be preferable. The problem (I think) is that the implementation cost is seen (e.g. here) as being due to security risks and bugs. I think the difference in "pure" implementation cost is actually quite similar for the opening vs. closing tag. It is just that having a live shadow root document into which elements are parsed is new, and opens the usual Web can of worms of possibilities for what can happen. I do empathize with that concern. This proposal (non-streaming) gets the syntax, the use case, and some data up and running, and from there we can hopefully expand to a streaming version later.
I agree that if we add a streaming option (e.g.
I would love to do this, but as I mentioned above, I think the devil (and implementation cost) is in the details. As far as I understand it (and I'm open to pointers here!), we can either create the shadow root upon the opening or closing tag. Full stop. The current proposal uses the closing tag, because it can then re-use the (debugged, working) machinery for

Let me know if you have suggestions here - I'd love to get a streaming version implemented that will get multi-implementer support. As-is, I still have no indications of support from the other engines, and that is without adding more complexity.
@mfreed7 this is very exciting to see coming together! Two things come to mind in reviewing the explainer that I'd love your thoughts on... Would you be open to comparing an additional baseline or maybe a parallel set of baselines? In particular, I experience using

What thoughts have you followed up on relative to the complexity ceiling of this approach? For instance the relatively benign

Thanks in advance, looking forward to future possibilities here!
@mfreed We accept the additional implementation cost of building streaming support, and while we would prefer it to be streaming from day one, we understand that the initial implementation will likely not support it, or even that future implementations may have difficulty implementing streaming at all.

What we'd like to see more consideration of is whether this can be designed such that streaming is presumed, and implementations can opt out. The goal being that a user can write their code once, presuming streaming is available, and get the benefits of a streaming implementation should it be available, but not break should the browser not support streaming. This would be more friendly to progressive enhancement, without authors having to develop both streaming and non-streaming versions of their code.

As an example (pure strawman), could there be an async API for fetching the shadowRoot, such that a streaming implementation returns it on the opening tag, and further content may come in later (perhaps another async method to return when a given shadowRoot is fully loaded), and non-streaming implementations simply don't return the shadowRoot until the close tag (and they always return a fully loaded shadowRoot, or perhaps return an empty shadowRoot and then immediately fill it, so all code reacts like streaming is happening)?
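A rough sketch of what that strawman might look like in code. To be clear, none of these method names exist in any engine or proposal; they only illustrate the "write once, streaming-transparent" shape being suggested (`my-app` is also a made-up element name):

```html
<script type="module">
  const el = document.querySelector('my-app');

  // Hypothetical: resolves with the shadow root as soon as the
  // implementation can provide it — at the opening <template> tag in
  // a streaming implementation, at the closing tag otherwise.
  const root = await el.getDeclarativeShadowRoot();

  // Hypothetical companion: resolves once the shadow root's contents
  // are fully parsed. In a non-streaming implementation, this and the
  // promise above effectively resolve together, so the same author
  // code works either way.
  await el.shadowRootParsed();
</script>
```

The design point is that author code written against this API degrades gracefully: a non-streaming engine simply resolves both promises at the close tag, and nothing breaks.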
Thanks for the suggestion - this is a great idea! If done right, this would allow implementations that do not wish to shoulder the additional burden of supporting streaming not to do so, while still supporting the more basic non-streaming version, and therefore the feature in general.
So right now this proposal does not specify any events for shadowRoot attachment, even in the non-streaming case. We've been trying to keep that problem (the "parser finished / children changed" problem) separate from this one, just to avoid hanging this proposal up on finding a solution to that problem. However, I definitely think we should tackle that problem in a general way.

Would you be ok if we just made a change here, something like this: `<template shadowroot="open" do-not-stream>`? With such a definition, and even absent the events being defined, I think this is a very usable API. Custom elements can co-exist with SSR Shadow DOM by ensuring that the SSR content comes before any async script tags. Later, when issue 809 is solved, the implementations can get more creative and detect the completion (or start) of shadow root parsing.

What do you think?
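A sketch of the co-existence pattern mentioned above, assuming (as the explainer proposes) that an open declarative root is exposed on `element.shadowRoot` by the time later scripts run (`my-widget` is a hypothetical element name):

```html
<my-widget>
  <template shadowroot="open">
    <button>SSR-rendered</button>
  </template>
</my-widget>

<!-- The SSR content above is fully parsed before this script runs,
     so the definition below always sees the declarative root. -->
<script>
  customElements.define('my-widget', class extends HTMLElement {
    connectedCallback() {
      // Hydrate the declarative root if present; otherwise fall back
      // to building the shadow tree client-side.
      const root = this.shadowRoot ?? this.attachShadow({ mode: 'open' });
      if (!root.firstElementChild) {
        root.innerHTML = '<button>Client-rendered</button>';
      }
      root.querySelector('button')
          .addEventListener('click', () => { /* attach behavior */ });
    }
  });
</script>
```

Ordering the SSR markup before any (async) script that defines the element is what stands in for the missing attachment event in this non-streaming version.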
Looking at this with @hober, I was trying to understand the major fear that people have with not supporting streaming. It seems to be a fear that web sites that are 100% bought into web components, in the sense that even their app root element is a custom element, might lose incremental rendering. As I understand it, you already have an experimental implementation, so I am wondering if that is really the case? I also believe it would be great to have this discussion as part of the explainer.
Thanks for the comment! Developers want streaming (e.g. comment), to improve FCP and similar metrics. As you mentioned, the extreme case of that is a

Having said that, I think the streaming ship has sailed, as there is strong pushback from WebKit on streaming support. And Gecko has been mostly quiet about the feature, except to say they wouldn't support a "streaming-optional" implementation. So in order to gain 2-implementer support, I'm punting on the streaming feature.
Chromium does have an experimental implementation, but we don't have data on this aspect yet. It might be difficult to gather such data, as the feature doesn't support streaming. There is a brief discussion of this aspect of the design in the explainer, here.
Looking through the linked discussion, it's not clear to me that WebKit is pushing back on streaming, so much as on having streaming be optional to implement; it looks like Mozilla is making similar arguments, as you summarised. Since we only really had feedback on the streaming feature, and that discussion seems to be ongoing among the relevant stakeholders, it seems like we can probably close up this review, unless you had any other questions for us to think about.
You're right that in the current discussion, the streaming-related pushback has been on optional streaming, which is why I punted on that option. But the prior decision in 2018, not to move forward with declarative Shadow DOM, was mostly predicated on the difficulty and security issues inherent in a streaming solution. I wrote up a summary of that discussion in the explainer, here. It was this specific prior discussion that motivated me to pursue the non-streaming solution when I revived declarative Shadow DOM in 2020, in the hopes of getting multi-implementer support.
Only one other issue has recently come up: a potential sanitizer bypass using declarative Shadow DOM. I have written up a summary of the issue and added it to the explainer. (I've also posted about this in the issue discussion, and reached out to sanitizer libraries.)

I believe this, like other sanitizer bypasses, is best handled by the sanitizer libraries themselves, which already need to issue frequent updates to keep on top of security issues. But if you have any input on ways to mitigate this issue from an API perspective, I'd be very interested to hear it. From my perspective, the issue seems fairly fundamental to any declarative Shadow DOM solution that allows closed shadow roots, mixed with any sanitizer library that allows the return of live DOM instead of string HTML. But thoughts appreciated!

If there's no input on the above issue, I do think we can close this TAG review. I really appreciate all of the feedback and help here!
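A sketch of why the bypass concerns sanitizers that return live DOM rather than strings (the payload is illustrative only; no real sanitizer API is shown):

```html
<script>
  // Untrusted input that smuggles active content inside a
  // declaratively-created shadow root:
  const dirty =
    '<div><template shadowroot="open">' +
    '<img src="x" onerror="alert(1)">' +
    '</template></div>';

  // If a sanitizer parses this with a fragment parser that honors
  // declarative Shadow DOM, the <img> ends up inside the <div>'s
  // shadowRoot (or, worse, a closed root), so a sanitizer that walks
  // only fragment.childNodes never visits it — yet returning the live
  // fragment hands the shadow tree to the page intact.
  //
  // A sanitizer that serializes back to a string avoids this, because
  // shadow roots are not serialized by default, so re-parsing the
  // string drops the smuggled content.
</script>
```

This is why the mitigation discussed in the thread centers on the fragment parser: if declarative Shadow DOM is disabled there by default, sanitizers that parse untrusted input via fragments never create the hidden shadow tree in the first place.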
I think you've already identified the key mitigation—to disable Declarative Shadow DOM from the fragment parser.
Yup.
Okay, will do.
Thanks for all your hard work on this & other features!
Thanks very much!
Hello TAG!
I'm requesting a TAG review of Declarative Shadow DOM.
A declarative API to allow the creation of `#shadowroot`s using only HTML and no JavaScript. This API allows Web Components that use Shadow DOM to also make use of Server-Side Rendering (SSR), to get rendered content onscreen quickly without requiring JavaScript for shadow root attachment and population.

Further details:
We'd prefer the TAG provide feedback as:
☂️ open a single issue in our GitHub repo for the entire review