Add support for rendering in world cameras to textures #4126
Conversation
I like that the `video-texture-source` component is simple. It just renders its camera to a texture whenever its flag is set.

Something about using `indexToEntityMap[src]` to find the `video-texture-source` component's entity feels slightly suspicious to me, but I didn't have a good suggestion for how to do it differently. I thought about suggesting putting unique entity IDs into the `src` (similar to what we do with `networked` objects), but that has its own problems. I think I like that IDs are serializable, but so is the `src` property that you're using to identify the `target`, so maybe I'm just wrong to be suspicious.
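(To make the discussion concrete, here's a minimal sketch of what such a component could look like. This is not the PR's actual code; the schema, the `needsUpdate` flag name, and rendering from `tock` are assumptions.)

```js
/*
 * Minimal sketch (not the PR's actual code): render this entity's camera
 * into a WebGLRenderTarget whenever a flag is set.
 */
AFRAME.registerComponent("video-texture-source", {
  schema: {
    width: { default: 1280 },
    height: { default: 720 }
  },

  init() {
    this.renderTarget = new THREE.WebGLRenderTarget(this.data.width, this.data.height);
    this.needsUpdate = false; // set to true by whatever consumes the texture
  },

  tock() {
    if (!this.needsUpdate) return;
    this.needsUpdate = false;

    const renderer = this.el.sceneEl.renderer;
    const camera = this.el.getObject3D("camera");
    if (!camera) return;

    // Render the scene from this entity's camera into the render target,
    // then restore the default framebuffer.
    renderer.setRenderTarget(this.renderTarget);
    renderer.render(this.el.sceneEl.object3D, camera);
    renderer.setRenderTarget(null);
  }
});
```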
```js
const texture = this.renderTarget.texture;
texture.matrixAutoUpdate = false;
texture.matrix.scale(1, -1);
texture.matrix.translate(0, 1);
```
Why the `scale` and `translate`?
Ah yeah, I should add a comment here... In short: to flip the Y.

Longer answer:

When sampling textures in OpenGL, (0,0) is the bottom-left corner of the texture, whereas glTF defines its UV space with (0,0) at the top-left corner (https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#images). Why don't we have problems with our other images, then? Because in WebGL, `texImage2D` uploads textures from `HTMLMediaElement`s starting at the top left (https://www.khronos.org/registry/webgl/specs/latest/1.0/#TEXIMAGE2D_HTML), essentially flipping them for us. Here, however, we are rendering directly to a texture, so we end up writing image data that starts at the bottom left, and things appear flipped.

The "correct" thing to do here is probably to get the texture data into the same format we expect from our other textures (either by changing our projection or flipping it after the fact in a second pass). That would ensure we don't run into any other issues using this texture data, but since the default ThreeJS shaders already have texture transforms, flipping it at sample time seems fine.
This PR builds on the `video-texture-target` component used for video avatars by adding a new `video-texture-source` component. This component is to be placed on a `Camera` entity. The `video-texture-target` can then specify that entity as its `target` to pull "video" from that camera onto its texture. This allows creating things like "jumbotron" screens and mirrors. This should be considered "beta" for now and may change depending on what we find trying it out in some scenes.

Blender exporter support for these new components is here: Hubs-Foundation/hubs-blender-exporter#28
There are still a few TODOs in the code that can be dealt with later. Ideally, we would also rework the camera-tool component/system to use this new component for its actual rendering, removing the duplicated code.
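(For anyone wanting to try this, a hypothetical wiring sketch follows. The `target` property name comes from the description above; everything else, including the selector-style value and the geometry/material setup, is an assumption rather than the component's confirmed schema.)

```js
const sceneEl = document.querySelector("a-scene");

// An in-world camera entity that renders itself to a texture.
const cameraEl = document.createElement("a-entity");
cameraEl.id = "jumbotron-cam";
cameraEl.setAttribute("camera", "");
cameraEl.setAttribute("video-texture-source", "");
sceneEl.appendChild(cameraEl);

// A "jumbotron" screen that pulls its video from that camera.
const screenEl = document.createElement("a-entity");
screenEl.setAttribute("geometry", "primitive: plane; width: 16; height: 9");
screenEl.setAttribute("material", "shader: flat");
screenEl.setAttribute("video-texture-target", "target: #jumbotron-cam");
sceneEl.appendChild(screenEl);
```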