Add XREnvironmentBlendMode to communicate display transparency #366
Conversation
Curious: do we want to align this with w3c/csswg-drafts#2719? And, I know @kearwood and others in the past have expressed concern about knowing what the "this is transparent" color is... seems related.
Aligning verbiage with CSS values that communicate the same thing is a good idea, yes. As for concerns about what a specific transparent color is, I'm not aware of any existing displays that use a feature like that, but if we find one I think the appropriate response is likely to add a

As with a lot of things about this API, the principle of "add only what we need now, but know how we'll extend it when needed" applies here.
This proposal looks great! One thing we should think through as we close on #330 is whether the same world blend mode enum should be first used by apps during

Based on the display technology on a given device, there are three UA support levels for a particular world blend mode:
Two things to note in the table:
explainer.md
Outdated
Some devices which support the WebXR Device API may use displays that are not fully opaque, or otherwise show the real world in some capacity. To determine how the display will blend rendered content with the real world, check the `XRSession`'s `worldBlendMode` attribute. It may currently be one of three values, and more may be added in the future if new display technology necessitates it:
- `opaque`: The world is not visible at all through this display. Transparent pixels in the `baseLayer` will appear black. This is the expected mode for most VR headsets.
This should be explicit about how an `opaque` world blend mode treats the alpha channel: explicitly ignoring it and treating all pixels as if alpha was 1.0.
This doesn't sound unreasonable, and I'm going to change the PR text to reflect it, because specific is better than non-specific in this case. I would like to verify the behavior of other systems, though. If Daydream, for example, treated this as an alpha blend against black, we'd have to clear the alpha channel to 1.0 prior to compositing in order to comply with this. That's a cost I'd rather not incur if we have a choice.
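The cost being weighed here can be sketched as an extra full-frame pass. This is a hypothetical CPU-side illustration (the helper name and 8-bit RGBA layout are assumptions, not anything in the PR); a real implementation would do the equivalent on the GPU:

```javascript
// Hypothetical helper, not part of the API: forces every pixel's alpha to
// 1.0 (255 in 8-bit) before handing the frame to the compositor, so a
// compositor that alpha-blends against black shows transparent pixels as
// black instead of dimming them. In WebGL the same effect could be had
// with an alpha-only clear:
//   gl.colorMask(false, false, false, true);
//   gl.clearColor(0, 0, 0, 1);
//   gl.clear(gl.COLOR_BUFFER_BIT);
function forceOpaqueAlpha(rgbaPixels) {
  for (let i = 3; i < rgbaPixels.length; i += 4) {
    rgbaPixels[i] = 255;
  }
  return rgbaPixels;
}
```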
explainer.md
Outdated
- `additive`: The world is visible through the display and pixels in the `baseLayer` will be shown additively against it. Black pixels will appear fully transparent. Alpha values will modulate the intensity of the colors but cannot make a pixel opaque. This is the expected mode for devices like HoloLens or Magic Leap.
For native apps on HoloLens, alpha values are ignored in the headset display. Can you elaborate on how a UA might use alpha values to modulate the intensity of colors?
I actually thought this WAS how HoloLens worked, with the colors coming through as sort of pre-multiplied by the alpha channel. I'll correct the PR.
explainer.md
Outdated
- `alpha-blend`: The world is visible through the display and pixels in the `baseLayer` will be blended with it according to the alpha value of the pixel. Pixels with an alpha value of 1.0 will be fully opaque and pixels with an alpha value of 0.0 will be fully transparent. This is the expected mode for devices which use passthrough video to show the world such as ARCore or ARKit enabled phones.
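Taken together, the three values quoted above suggest how an app might adapt its rendering. A minimal sketch, assuming the attribute as proposed in this PR (the specific clear colors chosen are illustrative, not normative):

```javascript
// Pick a clear color based on the session's blend mode so "transparent"
// regions behave as intended on each display type.
function clearColorFor(blendMode) {
  switch (blendMode) {
    case 'opaque':
      // Alpha is ignored; pick an explicit background color.
      return [0.1, 0.1, 0.1, 1.0];
    case 'additive':
      // Black pixels pass through as fully transparent on additive displays.
      return [0.0, 0.0, 0.0, 1.0];
    case 'alpha-blend':
      // Alpha 0.0 regions show the passthrough camera image.
      return [0.0, 0.0, 0.0, 0.0];
    default:
      throw new Error(`Unknown blend mode: ${blendMode}`);
  }
}
```

In an app this would be driven by the session attribute, e.g. `gl.clearColor(...clearColorFor(session.worldBlendMode))`.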
You should mention video passthrough headsets like Vive Pro as well, to keep that scenario in the reader's mind.
I really like this. I would also like to consider whether we can define these blend modes more generally; in other words, decoupling these blend modes from physical reality, and allowing them to be defined as the blend mode used by the UA to composite a layer against the UA's chosen underlying visual representation of reality (optical see-thru, video see-thru, virtual environment, etc). Basically I'm arguing we don't assume the underlying representation of reality (that is relevant to the user experience) is always "physical reality".

For example, the HoloLens is perfectly capable of rendering complete virtual environments such as tours of remote locations (and obviously these are displayed "additive" against physical reality). However, if we consider how a UA might allow an XRLayer to be composited on top of such a virtual environment (assuming the UA wants to use that virtual environment as the underlying representation of reality), then the blend mode for that augmentation layer should (probably?) be "alpha-blended" against that virtual environment, even though the final composite is "additive" against physical reality.

Along this same vein of thought, I suggest the name `environmentBlendMode`.
I don't object to the name `environmentBlendMode`. I do like the distinction that the thing it's blending against is somewhat arbitrary. I don't think any platforms enable this kind of functionality currently, but I can definitely see a future where you can run "AR" apps in something like the WinMR Cliff House.
Working off the assumption that the "👍2" on the previous comment was due to a general approval of my naming suggestion, I've updated the PR to now use the term "environment" rather than "world" (i.e. `environmentBlendMode` instead of `worldBlendMode`).
Seems like this PR has general support, with the primary addition being discussed (Alex's suggestion of requesting a specific blending mode) being something that can be layered on afterwards. As such I'd like to merge this soon, especially given that I won't be available for the next VR-centric call and would prefer not to have this outstanding for another 3 weeks. Anyone still have unvoiced strong opinions about this feature or the naming?
An issue there on an additive device is what happens if a layer is partially on top of another layer's virtual pixels and partially on top of optical reality. The UA can use any blend mode it likes when composing multiple virtual layers, but is limited by the capabilities of the display technology when optically composing over reality. In that regard, I see this per-session attribute as reading out how the final flattened layer stack will ultimately behave given the nature of the display.

When we later support multiple layers, we can then additionally give each layer its own blend mode, which separately lets apps decide how that layer should be flattened. The per-layer and per-session blend modes would combine to describe how composition takes place for that session.

If an additive device's UA or platform has a virtual underlay of some sort, some app pixels will directly land on clear display pixels and some will land on the underlay's pixels. In that case, the app's content still generally expects additive blending, and so it would seem least surprising to just consistently perform additive blending for all pixels, as if the underlay's pixels were part of the optical reality.
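The per-mode compositing behavior discussed above can be sketched as per-pixel math. This is a hypothetical reference function, not anything the API exposes; `src`/`dst` are `[r, g, b]` arrays with channels normalized to [0, 1]:

```javascript
// Composite one source pixel over a destination (the "world" or an
// underlying layer) according to the blend mode.
function compositePixel(mode, src, srcAlpha, dst) {
  if (mode === 'additive') {
    // Display adds emitted light to the world; alpha cannot occlude,
    // and black (0) contributes nothing, i.e. appears transparent.
    return dst.map((d, i) => Math.min(1, d + src[i]));
  }
  if (mode === 'alpha-blend') {
    // Standard "over" blend, matching
    // glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
    return dst.map((d, i) => src[i] * srcAlpha + d * (1 - srcAlpha));
  }
  // 'opaque': alpha is ignored; source replaces the world entirely.
  return src.slice();
}
```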
+1. That's exactly how I see it playing out as well. Given the lack of dissent I'm going to merge this now.
Addresses #145 (finally).

We've talked in the past about an `isTransparent` boolean or an `opaqueness` scalar, but after looking at the various scenarios that we want to account for on the market today, it really seems that what developers will actually want to know is how their content will be "blended" with the real world. As far as I can see the answers are "not at all" for VR, "additively" for HoloLens and friends, or the extremely catchy "glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)" for ARCore/ARKit-style passthrough rendering. Thus this PR attempts to communicate those states with more reasonable names.

By making this an enum we can extend it in the future, but I'm not sure how many other types of display we're likely to see. I guess I've heard people theorize about one where there's a sharp cutoff point between fully opaque and fully transparent pixels, but I'm not aware of any such display being available.