Added support for light probes #16223
Conversation
As implemented, `LightProbe` is a generalization of `AmbientLight` and `HemisphereLight`.

I have tried to keep the implementation as simple as possible for now. We can modify the implementation once we get consensus on what the API should be. This is a start. Here, I have treated `LightProbe` like `AmbientLight`.
|
Looking good! 😍 I see the current API looks like this:

```js
lightProbe = new THREE.LightProbe( undefined, API.lightProbeIntensity );
lightProbe.setFromCubeTexture( cubeTexture );
scene.add( lightProbe );
```

What do you think about

```js
light = new THREE.SphericalHarmonicsLight( API.lightProbeIntensity );
light.setFromCubeTexture( cubeTexture );
scene.add( light );
```

How common do you think it would be to be able to pass custom

In the future, maybe the browser will supply SH with the estimated light when doing AR, but in that case I can see this pseudo-code being a common usage:

```js
light = new THREE.SphericalHarmonicsLight();
light.setFromArray( WebXR.getLightEstimation() );
scene.add( light );
```
|
I would expect the coefficients would be saved and reused. That is why I am not concerned that the baking method
|
Personally, I prefer the name
|
/ping @bhouston, @richardmonette, @donmccurdy |
I think all LightProbes should use SH for diffuse (because it is more efficient than low-res cube maps for diffuse), and it is optional to use a small cube map for specular. Maybe it could be implemented in a hierarchy? LightProbe - supports SH only.

The goal should be to remove any per-material cube maps for lighting or reflections (they are a pain to manage anyhow) and instead replace them with scene elements based on LightProbes that are applied globally.

I think that generally Light Probes could be included in the scene graph. And then there is a component that processes them, creates their reflection maps + SH, then creates the tetrahedral grid, and then determines their intensities. I think it could be called WebGLLightProbes or something if we follow our current pattern. LightProbes could have a .needsUpdate on them, and if that is set, the tetrahedral mapping will recalculate and it will also recalculate its SH + reflection map. I would suggest that it is fully handled within WebGLRenderer and its subclasses -- this way it would be as easy to use as traditional three.js lights. Maybe it can be enabled/disabled on WebGLRenderer globally so that it doesn't search all the time for light probes to see if it should do something or not. It is an advanced feature people can turn on.

I wonder if a SpecularLightProbe could also be used for semi-accurate refraction mapping through a rough surface (via the mipmap structure.) I think it could. We should plan for that.
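As a usage sketch only -- none of these names are settled API, and `renderer.lightProbes` in particular is hypothetical -- the scheme above might look like this from user code:

```js
// Probes live in the scene graph like any other node.
var probe = new THREE.LightProbe();
probe.position.set( 0, 2, 0 );
scene.add( probe );

// Opt in globally, so the renderer doesn't scan for probes when unused.
renderer.lightProbes = true;

// Flagging a probe dirty would re-bake its SH + reflection map and
// re-tetrahedralize the grid on the next render.
probe.needsUpdate = true;
```
|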
@mrdoob, LightProbes are fundamentally different from lights. This is because they are a measurement of the light passing through a particular spot, not an emitter of light. Thus I would not call them a "Light", but I would strongly suggest calling them a "LightProbe". This is the term that Unity 3D uses: https://docs.unity3d.com/Manual/LightProbes.html

There are generally just two types of light probes: Diffuse, represented by SH, and Specular, represented by a low-res cube map. One generally cannot interpolate Specular cube maps well -- it leads to double/triple imaging -- thus often you just pick the closest light probe for the specular reflection (I think, but maybe someone else should confirm), but you interpolate between the 4 closest light probe SHes for the diffuse component.
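A minimal sketch of what that diffuse blend might look like in three.js terms, assuming the four enclosing probes and their barycentric weights (summing to 1) have already been found -- `probes` and `weights` here are hypothetical inputs:

```js
// Blend the SH coefficients of the four probes of the enclosing
// tetrahedron into a single SH for the query point.
function blendDiffuseSH( probes, weights ) {

	var result = new THREE.SphericalHarmonics3(); // coefficients start at zero

	for ( var i = 0; i < 4; i ++ ) {

		for ( var c = 0; c < 9; c ++ ) {

			// weighted sum, coefficient by coefficient
			result.coefficients[ c ].addScaledVector( probes[ i ].sh.coefficients[ c ], weights[ i ] );

		}

	}

	return result;

}
```
|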
@bhouston said:
On the JS side, representing
If I am not mistaken, we can use the term |
I'd agree that
That hadn't occurred to me, but would be excellent if possible.
One complication is that light probes are not usually applied to static objects, as far as I know. Terrain will not receive meaningful lighting information from any single tetrahedron in the light probe volume. Selective lighting has been discussed before (#5180); alternatively, making light probe contributions optional at the material level would be sufficient. EDIT: I think I prefer the latter. |
You are correct that light probes for GI are not generally used on static objects. Normally the static parts of the scene will have baked light maps on them (which capture the incoming diffuse global illumination arriving on those surfaces at a low resolution, incorporated into the object as an indirect diffuse contribution in our model.) Thus one should basically categorize objects into dynamic objects that are lit via the LightProbes and non-dynamic ones that should have their GI baked into their light maps. I think it really is the objects themselves that are dynamic rather than their materials. Or at least it would be easier to manage that way.

At some point we should probably have a GI light baker as well (e.g. https://github.com/mem1b/lightbaking ) that uses light maps that are auto-unwrapped like other game engines (#14053 ).
|
@WestLangley wrote:
Could we put all the SHs for all light probes in the scene into a single data texture -- all 1000 or so -- and then just have indices and weights passed into each material shader that needs them? But if we are going to have a lot of cube maps, we should also pack those into just a few large texture maps. Otherwise we will run into texture slot issues on low-end mobile devices.
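A minimal sketch of such a packing, assuming each probe exposes a `SphericalHarmonics3` as `probe.sh` (the `probes` array here is a hypothetical input) -- one row per probe, one RGBA texel per coefficient, alpha unused:

```js
var PROBE_COUNT = probes.length;
var COEFFS = 9;
var data = new Float32Array( PROBE_COUNT * COEFFS * 4 );

for ( var p = 0; p < PROBE_COUNT; p ++ ) {

	var coefficients = probes[ p ].sh.coefficients; // Vector3[ 9 ]

	for ( var c = 0; c < COEFFS; c ++ ) {

		var i = ( p * COEFFS + c ) * 4;
		data[ i + 0 ] = coefficients[ c ].x;
		data[ i + 1 ] = coefficients[ c ].y;
		data[ i + 2 ] = coefficients[ c ].z;
		// data[ i + 3 ] left at 0; alpha channel unused

	}

}

// 9 texels wide, one row per probe; shaders index by probe id and coefficient.
var shTexture = new THREE.DataTexture( data, COEFFS, PROBE_COUNT, THREE.RGBAFormat, THREE.FloatType );
shTexture.needsUpdate = true;
```
|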
You are right. In Unreal Engine you place reflection probes completely separately from light probes (which are automatic, I understand): https://docs.unrealengine.com/en-US/Engine/Rendering/LightingAndShadows/ReflectionEnvironment

Unity 3D's reflection probe: https://docs.unity3d.com/Manual/class-ReflectionProbe.html

So apparently we need both.
|
I am suggesting baby steps. In the long term, we can have

Currently, with this PR, we have

Instead, we could have

As a next step, we can add an example where

After that, we can add reflection probes. That requires parallax correction, since the probes are not centered on the mesh.
|
I like the plan @WestLangley. Baby steps is the only way to get stuff done. |
Well, it is weird if all meshes that share a material must share the same light probe. It really should be that each mesh gets its own light probes, based on that mesh's position in the scene. |
@bhouston What do you think of this approach for now? We would later simultaneously remove envMap and lightProbe from material. Just a thought... |
Thanks for the explanations guys! I was a bit confused because currently

As per the constructor suggestion, I was basically trying to fix the current
|
@mrdoob A light probe is a probe of ambient light. In our model, we rightly assume that a mesh located at a probe location will receive the same amount of ambient (indirect) light as the probe does.
We use the nomenclature "light" for

We could rename them to

Or, we can keep the

As far as the inheritance goes, I am open to any suggestions.
|
@mrdoob wrote
I understand. Currently, however, this pattern does work:

```js
var hemiLightProbeSH = new THREE.SphericalHarmonics3().set( [

	new THREE.Vector3( 0.5908, 0.5908, 0.5908 ),
	new THREE.Vector3( 0.8506, 0.8506, 0.8506 ),
	new THREE.Vector3( 0, 0, 0 ),
	new THREE.Vector3( 0, 0, 0 ),
	new THREE.Vector3( 0, 0, 0 ),
	new THREE.Vector3( 0, 0, 0 ),
	new THREE.Vector3( -0.3642, -0.3642, -0.3642 ),
	new THREE.Vector3( 0, 0, 0 ),
	new THREE.Vector3( -0.6308, -0.6308, -0.6308 )

] );

var lightProbe = new THREE.LightProbe( hemiLightProbeSH, intensity );
```

I am open to any suggestions, of course.
|
Alternatively, we could do something like this:

```js
var lightProbe = new THREE.LightProbe();

lightProbe.set( hemiLightProbeSH, intensity );

lightProbe.setFromCubeTexture( cubeTexture, intensity );
```

Currently, the

If color is ignored,
|
What do you think about copying

```js
function LightProbe( color, intensity ) {

	Light.call( this, color, intensity );

	this.sh = new SphericalHarmonics3();

	this.type = 'LightProbe';

}
```
|
Done! We will have to decide if we want |
Sweet! Thanks! |
If we want that, it should be a function on the SphericalHarmonics class: `sh.multiplyByColor( red );`

I really like @mrdoob's recommendation that LightProbe be derived from Object3D. AmbientLightProbe and HemisphereLightProbe could both be implemented as DiffuseLightProbe that takes an SH. Then we actually get rid of code and special cases in the renderer. And then we also get SpecularLightProbe in the future.

I would have a separate light probe workflow in the WebGLRenderer that is separate from lights, and it is all SH-based for diffuse interactions (DiffuseLightProbe) and cubemap-based for specular interactions (SpecularLightProbe.) The class hierarchy would look something like this:
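Something like the following, perhaps -- a sketch assembled from the class names mentioned in this thread, not a settled design:

```
Object3D
└── LightProbe
    ├── DiffuseLightProbe (SH-based)
    │   ├── AmbientLightProbe
    │   └── HemisphereLightProbe
    └── SpecularLightProbe (cubemap-based, future)
```
|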
@Usnul and @WestLangley I would just stick with SH inside of the diffuse light probe. It is simple and effective. I think trying to support both cube maps and SH will complexify the code. If we ever find that texture lookups are much faster, we can just convert the SHs into data textures where all SHs are stored in a single texture for a level.

(Completely off topic: probably at some point we should redo SphericalHarmonics to have a Float32Array inside, like Matrix4. But that will only become important once we have a lot of them.)
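For illustration, a minimal sketch of flattening a `SphericalHarmonics3` into a `Float32Array` (a hypothetical helper, not existing API) -- the kind of layout an array-backed or data-texture representation would use:

```js
// Pack the 9 Vector3 coefficients into 27 consecutive floats.
function shToFloat32Array( sh ) {

	var array = new Float32Array( 27 );

	for ( var i = 0; i < 9; i ++ ) {

		sh.coefficients[ i ].toArray( array, i * 3 );

	}

	return array;

}
```
|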
Thanks everyone for your feedback. I agree it is best to stick with SH-only for representing irradiance. There are many issues I am working on to get the modeling correct regarding light probes. I am also listening to the opinions of others to get consensus regarding the three.js light probe workflow, and in fact, a consensus as to what a three.js light probe represents. Please, let's avoid refactoring the three.js class structures at this point. If someone is interested in creating a CPU-based tessellation example, that can be done in parallel, and would be of interest, I expect. |
I'd be willing to try implementing CPU-based Delaunay tetrahedralization and probe interpolation. Let me see if I can make some progress this weekend. |
Why do color and intensity apply? I feel that they do not apply to the sh of a diffuse light probe, nor to the cubemaps of the reflection light probes.

But you are right that light probes really only need position, not scale or rotation.

> On Tue, Apr 16, 2019, 9:38 PM WestLangley wrote:
>
> @mrdoob @bhouston Regarding having LightProbe extend Object3D ...
>
> The only Object3D property that applies is .position, I think. On the other hand, Light.intensity and Light.color do apply.
>
> So, why extend Object3D? Does LightProbe have to extend Object3D?
|
I'm not sure what we'd lose by removing |
> Why does color apply?

It doesn't have to. It just tints. I agree, it can be removed.

> Why does intensity apply?

Light probe coefficients can be computed from a cubeMap. A cubeMap is unit-less. A cubeMap has an associated intensity, radiance. The probe needs an intensity parameter, irradiance. The probe coefficients are unit-less.
|
Can

Certainly, the shader needs SH coefficients. So we need
|
I think light probe sh should have units. The cube maps should have units as well, but they may need an intensity/scale factor if loaded externally.
|
I agree that

Extending
|
I can't help but feel that Object3D turns slowly into a liability. Provided we have a performant implementation of light probes, many users will want to use a lot of them. A light probe by itself, if implemented as an SH with 9 floats, needs the following space:

- coefficients: 9 * 4 bytes
- position: 3 * (4 for Float32 or 8 for Float64)

that's 48 bytes per probe. Sure, this is a gross simplification, since there's a lot of overhead associated with a JS object under the hood. But last I checked, Object3D has a lot more:

https://github.com/mrdoob/three.js/blob/15a31bc229294d2030a6a0a602cf5a886dda259f/src/core/Object3D.js#L22-L99

by itself it has 24 properties and a bunch of owned objects. Let's have a look at whether they are applicable to a LightProbe:

| property | type | applicability |
| --- | --- | --- |
| id | number | ✔ ok |
| uuid | string | ➖ ok |
| name | string | ➖ "my favorite probe"? - ok |
| type | string | ? |
| parent | Object3D | ➖ ok, if we want to support grouping |
| children | Object3D[] | ❌ nope |
| up | Vector3 | ❌ nope |
| position | Vector3 | ✔ yep |
| rotation | Euler | ❌ hard nope |
| quaternion | Quaternion | ❌ same as rotation |
| scale | Vector3 | ❌ nope |
| modelViewMatrix | Matrix4 | ➖ ok? |
| normalMatrix | Matrix3 | ➖ ? |
| matrix | Matrix4 | ➖ ok? |
| matrixWorld | Matrix4 | ➖ ok? |
| matrixAutoUpdate | boolean | ➖ ok? |
| matrixWorldNeedsUpdate | boolean | ➖ ok? |
| layers | Layers | ➖ ok |
| visible | boolean | ❌ hard nope |
| castShadow | boolean | ❌ nope |
| receiveShadow | boolean | ❌ nope |
| frustumCulled | boolean | ✔ ok |
| renderOrder | boolean | ❌ nope |
| userData | object | ➖ ok |

|
I think that we do not have to derive from Object3D. Deriving from Object3D may suggest that you can move light probes around easily, but really if you do, you need to recalculate the tetrahedralization, which is going to be at least medium costly.

Thus just having a set of LightProbes as a separate structure may be okay. For efficiency, it really could just be a set of SH and positions for the diffuse contributions. And then a set of cubemaps and positions for the specular contributions -- which would then have to be PMREM'ed and packed into a single large texture, I figure.

This may be too much of an optimization though. I think that reflection light probes may actually move around with characters, especially hero characters. And maybe after the initial static light probe implementation is done, we move on to the dynamic light probe method that achieves dynamic real-time GI like Enlighten's.

But on the other side, people like to position light probes in the scene editor. This is really how things are generally done. That advocates that at least during design LightProbes are Object3D's, and it is only after they are computed that the LightProbes are pruned from the Object3D tree into an efficient run-time format?

-ben
|
Maybe the process of creating the optimized "DiffuseProbes" structure includes the creation of the tetrahedralization as well? And it is only a "DiffuseProbes" structure that can be used to light a scene. The individual LightProbes that one can place in the scene are just design helpers that are completely ignored at run time, but these can (though maybe other sources of data could be used instead) be fed to the light probe compiler to create the "DiffuseProbes"?

Thus we will have a distinct compilation stage to "DiffuseProbes": it will be an explicit compilation stage rather than an automated one (thus much easier to implement, and you won't have surprises at runtime with it recalculating), and it is designed for maximum run-time efficiency. We can likely make a binary representation for read/write pretty easily as well. DiffuseProbes can be something that is set on the scene itself, rather than an Object3D structure.

```js
var probePositions = /* get positions of LightProbe nodes from scene */

// pass in renderer and scene so that it can create the probes via rendering
scene.diffuseProbes = DiffuseProbes.Compile( probePositions, scene, renderer );
```

Some possible name ideas, from short to long:

```
DiffuseProbes (my favorite)
DiffuseProbeSet
DiffuseLightProbes
BufferDiffuseProbes
DiffuseLightProbeSet
BufferDiffuseProbeSet
BufferDiffuseLightProbeSet (getting too long)
```
|
Some other points of comparison for terminology: Unity has Light Probes and Light Probe Groups. From the documentation:

Blender 2.8 has three types of light probe: Reflection Cubemaps, Reflection Planes, and Irradiance Volumes. Irradiance volumes contain a configurable number of probes, arranged automatically in a 2D or 3D grid.

I would also consider
This brings us back to the question of where interpolation should occur, per #16270 (comment). If an object exists outside the renderer to represent a collection of light probes, I don't think interpolation on that collection should be happening inside the renderer. Alternatively, the collection could be an Object3D, rather than the individual probes, and interpolation would happen in the volume's local coordinate space.

```js
var probe1 = new THREE.LightProbe( sh1, pos1 ); // not an Object3D
var probe2 = new THREE.LightProbe( sh2, pos2 );
// ...

var probeVolume = new THREE.LightProbeVolume( [ probe1, probe2, ... ] ); // Object3D
scene.add( probeVolume );

mesh.material.lightProbes = true;

function animate () {

	probeVolume.update( mesh ); // updates mesh.diffuseSH or mesh.material.diffuseSH?

	renderer.render( scene, camera );

}
```
|
I do not have a strong opinion on where the parent class exists -- add to Scene or as a scene parameter. But it does not make much sense to move around a LightProbeVolume, thus I question why it should be derived from Object3D.

I like the name Volume better than Set. I also dislike the Buffer prefix. The Diffuse light probe set will be compiled separately from the Specular. Thus I do think IrradianceVolume or DiffuseProbeVolume makes sense. IrradianceVolume is actually probably the most technically correct; it is just a mouthful and not obvious to newcomers, who will find it hard to relate DiffuseLightProbe with IrradianceVolume based on name alone. |
I was alluding to

I think having a "compiler" of sorts is necessary. Having the
|
The tetrahedral structure is technically a piecewise-linear discretization of space. Thus it is a volume at that point, not just implicitly. :)
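For concreteness, a minimal sketch of the piecewise-linear weights for one tetrahedron, using three.js math types (a hypothetical helper; assumes the containing tetrahedron `( a, b, c, d )` has already been found). These are the weights a diffuse SH blend would consume:

```js
// Barycentric weights of point p inside the tetrahedron ( a, b, c, d ).
// Solves p - a = w1*(b - a) + w2*(c - a) + w3*(d - a), with w0 = 1 - w1 - w2 - w3.
function tetrahedronBarycentric( p, a, b, c, d ) {

	var m = new THREE.Matrix3().set(
		b.x - a.x, c.x - a.x, d.x - a.x,
		b.y - a.y, c.y - a.y, d.y - a.y,
		b.z - a.z, c.z - a.z, d.z - a.z
	);

	var v = new THREE.Vector3().subVectors( p, a ).applyMatrix3( m.invert() );

	return [ 1 - v.x - v.y - v.z, v.x, v.y, v.z ];

}
```
|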
I suspect they can be one and the same – a tetrahedral structure could even be updated incrementally, with some additional work – but we can see how this progresses.
I can't think of a strong reason to move the volume either, but perhaps a user might want multiple volumes in the same scene, e.g. for dividing the scene into several zones? I'm speculating, though, and this would require the user to determine which volume updates a particular mesh... |
Followup in #16228. |