Media Controller Rearchitecture #529
cjpillsbury started this conversation in Proposals
Overview
As part of our Media Chrome v1.0 efforts, we are planning to rework and decouple some of the core architecture currently owned by and instantiated with Media Controller. We're doing this now to avoid a potentially major refactor down the road and to unlock and/or simplify possible "blue sky"/future uses of Media Chrome. That said, there are still a lot of TBDs when it comes to where we want these pieces to settle before an official `media-chrome@1.0` release.

What I'm hoping for with this discussion/outline is to get folks thinking about what we or users of Media Chrome might want now or in the future. The point isn't necessarily to implement all of these things, but rather to:
The initial steps for this effort have already begun in earnest in this PR
Media Controller, a story in three parts
You can think of our current core architecture for Media Chrome as being owned by a set of three inter-related and currently "coupled" parts of the codebase that can be used via an instance of the `<media-controller>` element. These are:

1. `<media-container>`

The `<media-container>` element is primarily responsible for a convenient, reasonable layout of a "Player UI". It includes layering, well-defined slots, and automatic showing/hiding for things like:

In addition, it is, at least currently, responsible for some state management, particularly state that is more explicitly about the (non-media) UI, such as state related to showing/hiding aspects of the UI based on user interactions.
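To make those layout responsibilities concrete, here's a rough markup sketch (illustrative only; it uses `<media-controller>` since that's how the container is reached today, and slot names like `media` and `centered-chrome` reflect the current conventions as I understand them):

```html
<media-controller>
  <!-- The media element is slotted in and sized/layered by the container. -->
  <video slot="media" src="https://example.com/video.mp4"></video>

  <!-- Named slots position chrome without custom layout CSS. -->
  <media-loading-indicator slot="centered-chrome"></media-loading-indicator>

  <!-- Direct children like a control bar land in the bottom chrome area,
       which can be auto-shown/hidden based on user activity. -->
  <media-control-bar>
    <media-play-button></media-play-button>
    <media-time-range></media-time-range>
    <media-mute-button></media-mute-button>
  </media-control-bar>
</media-controller>
```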
2. Media Controller
The Media Controller itself can be thought of as two inter-related but distinct things, and the code is somewhat (though not entirely) written to reflect this:
2.a. State Management (JS)
While the bulk of this code still "lives in" the `media-controller` module, it's mostly broken up into its own parts of the codebase. This is the part of the code that's responsible for, in particular:

These are all JavaScript only. Currently, there are some assumptions on how this state is propagated and "wired up" that tie it to use with `HTMLElement`s (including but not limited to Web Components), but these mostly occur "around the edges".
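As a thought experiment (this is not the current implementation or API), a fully standalone version of this state management layer might end up shaped like a plain JS "store". Every name below (`createMediaStore`, `subscribe`, `requestChange`) is hypothetical, and the stub implementation exists only to make the sketch self-contained:

```js
// Purely illustrative: one possible shape for a standalone, HTMLElement-free
// state management entity. These are not Media Chrome's actual exports.
function createMediaStore(media) {
  let state = { mediaPaused: media.paused, mediaMuted: media.muted };
  const listeners = new Set();

  const setState = (partial) => {
    state = { ...state, ...partial };
    listeners.forEach((fn) => fn(state));
  };

  // Keep derived state in sync with the media element (plain values, no attribute strings).
  media.addEventListener('play', () => setState({ mediaPaused: false }));
  media.addEventListener('pause', () => setState({ mediaPaused: true }));
  media.addEventListener('volumechange', () => setState({ mediaMuted: media.muted }));

  return {
    getState: () => state,
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
    // State change *requests* go through the store rather than receivers
    // poking at the media element directly.
    requestChange(type) {
      if (type === 'mediaplayrequest') media.play();
      if (type === 'mediapauserequest') media.pause();
      if (type === 'mediamuterequest') media.muted = true;
    },
  };
}

// Any consumer (web component, framework code, a test) can be a "receiver":
const store = createMediaStore(document.querySelector('video'));
const unsubscribe = store.subscribe((s) => console.log('paused?', s.mediaPaused));
store.requestChange('mediaplayrequest');
unsubscribe();
```

The point is only that nothing above requires `HTMLElement`s or attributes; the DOM-facing pieces could become thin adapters on top.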
2.b. `<media-controller>` (HTML/Declarative)

Even though this element currently is a subclass of `<media-container>`, it can be thought of (and is sometimes used) as having its own responsibility, namely: it is a simple, declarative way, via HTML, of getting access to the state management defined in 2.a., above. In other words, one can get all of the state management without having to write a lick of JS simply by adding `<media-controller>` to an HTML page. Even if most of the state management logic moves to a standalone JavaScript entity, this feels like an independently valuable feature worth mentioning, particularly for DevEx/"ergonomics"/ease of use.
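For instance (a minimal sketch of the declarative usage being described, which may differ in detail from what ships in v1.0):

```html
<!-- No page-author JavaScript: state management comes along for the ride
     just by wrapping the media element and UI in <media-controller>. -->
<media-controller>
  <video slot="media" src="https://example.com/video.mp4"></video>
  <!-- Reflects and requests play/pause state through the controller. -->
  <media-play-button></media-play-button>
</media-controller>
```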
What does it mean to decouple these three parts?

Minimally, if we want to be able to treat these three as distinct, we'll need to:

- Remove the State Management (2.a.) code's assumptions about `HTMLElement`s for data flow, such as: serializing state to strings (on the assumption that they'll be used to set `HTMLElement` attributes), getting/setting attributes on "Media State Receiver" `HTMLElement`s, and listening to DOM events for media state change requests. (See the sketch after this list.)
- Change `<media-controller>` to not be a subclass of `<media-container>`
- Make `<media-controller>` a non-visual, non-interactive element
  - Note that this doesn't mean `<media-container>` cannot have state management "built in" by default, only that it would have to be handled through "composition" instead of via subclassing. In other words, this is a separate decision. (more on this, below)
- Refactor `<media-container>` to look more like just another "Media State Receiver"
  - Ensure `<media-container>` is "shared" with the relevant 2.a. (State Management) instance (potentially but not necessarily simply using our "media state change request" infrastructure, where the value would be the media element itself).
  - Ensure the `<media-container>` UI is made available somehow (either provided or "baked in to its class def"). (more on this, below)
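To ground that first bullet, here's a rough sketch of the kind of `HTMLElement`-shaped data flow that exists today; the attribute and event names (`mediapaused`, `mediaplayrequest`, `mediapauserequest`) match the current conventions as I understand them, but the code itself is illustrative rather than the actual internals:

```js
// Illustrative only: the "shape" of today's HTMLElement-coupled data flow.
const controllerEl = document.querySelector('media-controller');
const mediaEl = controllerEl.querySelector('[slot="media"]');
const receiverEls = [...controllerEl.querySelectorAll('media-play-button')];

// Downstream: state is serialized to strings and written as attributes
// on "Media State Receiver" elements (boolean state -> attribute presence).
function propagateMediaState(attrName, value) {
  receiverEls.forEach((el) => {
    if (value === false || value == null) el.removeAttribute(attrName);
    else el.setAttribute(attrName, value === true ? '' : String(value));
  });
}

mediaEl.addEventListener('pause', () => propagateMediaState('mediapaused', true));
mediaEl.addEventListener('play', () => propagateMediaState('mediapaused', false));

// Upstream: receivers dispatch bubbling DOM events ("media state change requests")
// that the controller translates into calls on the media element.
controllerEl.addEventListener('mediaplayrequest', () => mediaEl.play());
controllerEl.addEventListener('mediapauserequest', () => mediaEl.pause());
```

Decoupling would mean the state management core stops assuming this attribute/event plumbing, and the plumbing becomes just one possible adapter around it.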
Why decouple these three parts?

The short answer here (at least in my opinion) is that we can make our code less "opinionated" or "presumptuous" about how it has to be used, and we can do so without too much work or making the "under the hood" code especially more complex. By removing these assumptions and deep coupling, we can reduce the amount of friction for at least some categories of use cases and feature additions, even ones that we might not currently be envisioning. Below are just a few "blue sky" examples. More "blue sky" ideas from other contributors are encouraged, so that we have as many of them as possible available when moving forward with architecture decisions and implementation.
Example 1 - Extending the Media Controller to add logic for your use case:
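A hedged sketch of what this could look like (the `MediaController` import path/name and lifecycle details are assumptions on my part; the point is only that a subclass can layer app-specific logic on top of the existing state management):

```js
// Illustrative sketch: extend the controller to add your own logic
// (analytics, gating, etc.) around existing media state change requests.
import { MediaController } from 'media-chrome'; // export name/path assumed

class AnalyticsMediaController extends MediaController {
  connectedCallback() {
    super.connectedCallback();
    // Media state change requests bubble up from UI components as DOM events,
    // so a subclass can observe (or intercept) them here.
    this.addEventListener('mediaplayrequest', () => {
      console.log('play requested - send a beacon, check entitlements, etc.');
    });
  }
}

customElements.define('analytics-media-controller', AnalyticsMediaController);
```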
Example 2 - Replacing the media controller with a different media domain such as realtime:
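And a purely hypothetical sketch for this one: a controller-like element for a realtime domain that keeps the same contract (attributes out to state receivers, request events in) but manages something other than an `HTMLMediaElement`. Every name here (`RealtimeController`, the `session` object and its events, the `participantcount` attribute) is invented for illustration:

```js
// Hypothetical: not a Media Chrome API. The idea is that anything honoring the
// same contract (set state attributes on receivers, respond to bubbling request
// events) could stand in for the media controller in a different media domain.
class RealtimeController extends HTMLElement {
  #session;
  #receivers = [];

  connectedCallback() {
    // Respond to the same request events Media Chrome UI components dispatch...
    this.addEventListener('mediaplayrequest', () => this.#session?.resume());
    this.addEventListener('mediapauserequest', () => this.#session?.pause());
  }

  registerReceiver(el) {
    this.#receivers.push(el);
  }

  set session(session) {
    this.#session = session;
    // ...and propagate realtime state the same way: via attributes on receivers.
    session.on('participants-changed', ({ count }) => {
      this.#receivers.forEach((el) => el.setAttribute('participantcount', String(count)));
    });
  }
}

customElements.define('realtime-controller', RealtimeController);
```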
Here are a few (non-exhaustive) very broad categories of what can be done or done more easily by separating out the three pieces:

- Folks using `<media-container>`, but with their own State Management infrastructure.

What still needs to be figured out?
Even if we move forward with splitting these three pieces into their own entities, there are still open questions in the details. At least some of these can also be approached with a "blue skies" mindset. Here are a few:
- Should folks still be able to get the "status quo", where they can drop a single element on the page and get all 3 parts? E.g. should `<media-container>` have state mgmt by default or with minimal additional code? Or should we assume the state management controller and container UI will be separate elements, at least for the time being? This discussion began with a slightly different framing here.
- What state actually belongs to the controller? Some state is very obviously media state (e.g. muted, paused). Others are slightly less obvious (e.g. fullscreen), more descriptive of the runtime environment (e.g. airplay availability), or exclusively about UI (e.g. user inactivity). Here's a rough Venn Diagram of state:
- `<media-container>`, as a UI element, will still need a slot for a media element for layout purposes. How should we model the relationship between the state management controller and the slotted media element in `<media-container>`? What if someone doesn't want to use `<media-container>` for UI layout?
- How/where should we handle/define state propagation for our official UI components? If we don't want to deeply assume "Media State Receivers" will be web components, we still need something to be responsible as the "glue"/translation layer for this aspect of Media Chrome state management. (See the sketch after this list.)
- Similar to the previous one, how/where should we handle/define media state requests? If we don't want to deeply assume DOM events, we still need something to be responsible as the "glue"/translation layer for this aspect of Media Chrome state management.
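To illustrate those last two questions, here's one hypothetical "glue"/translation layer for a receiver that isn't a web component: it consumes state updates and issues state change requests via plain callbacks instead of attributes and DOM events. All names are invented, and it assumes a `subscribe`/`requestChange`-style state manager like the earlier sketch:

```js
// Hypothetical glue layer: adapt Media Chrome-style state propagation and
// state change requests to a plain-JS consumer (no attributes, no DOM events).
function connectPlainReceiver(store, receiver) {
  // Downstream: push plain state objects into the receiver.
  const unsubscribe = store.subscribe((state) => receiver.onMediaState(state));

  // Upstream: hand the receiver a callback for issuing state change requests,
  // instead of requiring it to dispatch bubbling DOM events.
  receiver.requestMediaState = (type) => store.requestChange(type);

  return unsubscribe;
}

// Example consumer: could just as easily be a framework component or a test double.
const loggingReceiver = {
  onMediaState: (state) => console.log('new media state:', state),
};
// connectPlainReceiver(someMediaStore, loggingReceiver);
```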
It's worth noting that some of these decisions (or others that are unlisted) don't need to be fully fleshed out before we do an official v1.0 Media Chrome release. That said, as mentioned at the top, we should try to avoid "coding ourselves into a corner" that would make some of these decisions "implicitly decided" prematurely.
What's next?
Ideally, folks will start coming up with other "blue sky" goals/possibilities for what the new rearchitecture could unlock. Similarly, if there are other open questions that should be on our radar sooner rather than later, we should try to identify those now.
Prior Thoughts/Prior Art
Over a year ago, I drafted this potential architecture diagram as a longer term goal. While we might not want to do everything here, it may be helpful as a way to get bearings on some of the moving pieces. If this is not helpful, feel free to ignore!