
Added support for light probes #16223

Merged: 4 commits, Apr 14, 2019

Conversation

WestLangley
Collaborator

LightProbe is a generalization of AmbientLight and HemisphereLight.

I have tried to keep the implementation as simple as possible for now. We can modify the implementation once we get consensus on what the API should be. This is a start. Here, I have treated LightProbe like AmbientLight.

@WestLangley
Collaborator Author

WestLangley commented Apr 12, 2019

As implemented, LightProbe is not a reflection probe. It is a probe of ambient light. With that in mind, here are some issues for discussion.

  1. The conversion from an environment (radiance) map to SH coefficients is currently CPU-based (LightProbe.setFromCubeTexture( cubeTexture )). A GPU solution may be warranted.

  2. Being able to also represent a light probe with a 16x16x6 cube map would be nice. We would need an SH-coefficients-to-(irradiance)-cube-map converter.

  3. In my view, light probe interpolation belongs on the CPU, at the application level. That means the shader only needs to support a single light probe, which represents the interpolated irradiance at the mesh location. (See the sketch after this list.)

  4. Given (3), should we instead add a new property, mesh.lightProbe, similar to mesh.envMap?
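
To illustrate (3): a minimal sketch of application-level blending, assuming the SphericalHarmonics3 class this PR introduces; blendProbeSH and the precomputed barycentric weights are hypothetical.

// Blend the SH of the probes surrounding a mesh (e.g. the four corners of the
// enclosing tetrahedron) with barycentric weights, producing the single
// interpolated probe the shader would consume.
function blendProbeSH( probes, weights ) {

	var sh = new THREE.SphericalHarmonics3(); // coefficients start at zero

	for ( var i = 0; i < probes.length; i ++ ) {

		sh.add( probes[ i ].sh.clone().scale( weights[ i ] ) );

	}

	return sh;

}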

@mrdoob
Owner

mrdoob commented Apr 12, 2019

Looking good! 😍

I see the current API looks like this:

lightProbe = new THREE.LightProbe( undefined, API.lightProbeIntensity );
lightProbe.setFromCubeTexture( cubeTexture );
scene.add( lightProbe );

What do you think about THREE.SphericalHarmonicsLight( intensity )?

light = new THREE.SphericalHarmonicsLight( API.lightProbeIntensity );
light.setFromCubeTexture( cubeTexture );
scene.add( light );

How common do you think it would be to pass custom SphericalHarmonics3 via the constructor? I suspect it's more likely the user will generate them like in the example here.

In the future, maybe the browser will supply SH with the estimated light when doing AR, but in that case I can see this pseudo-code being a common usage:

light = new THREE.SphericalHarmonicsLight();
light.setFromArray( WebXR.getLightEstimation() );
scene.add( light );

@mrdoob added this to the r104 milestone Apr 12, 2019
@WestLangley
Collaborator Author

How common do you think it would be to pass custom SphericalHarmonics3 via the constructor? I suspect it's more likely the user will generate them like in the example here.

I would expect the coefficients would be saved and reused. That is why I am not concerned that the baking method .setFromCubeTexture() is only JS-based.

@WestLangley
Collaborator Author

WestLangley commented Apr 12, 2019

Personally, I prefer the name THREE.LightProbe to THREE.SphericalHarmonicsLight.

LightProbe should also support a 16x16x6 cube map representation. SH is only one way of encoding it.

@WestLangley
Collaborator Author

/ping @bhouston, @richardmonette, @donmccurdy

@bhouston
Contributor

bhouston commented Apr 12, 2019

I think all LightProbes should use SH for diffuse (because it is more efficient than low-res cube maps for diffuse), and it is optional to use a small cube map for specular. Maybe it could be implemented in a hierarchy?

LightProbe - supports SH only.
SpecularLightProbe - derived from LightProbe, uses SH for diffuse, and adds support for a cube map for specular reflections? Replaces the need for the envCubeMap that is currently used on materials.

The goal should be to remove any per-material cube maps for lighting or reflections (they are a pain to manage anyhow) and instead replace them with scene elements based on LightProbes that are applied globally.

I think that generally Light Probes could be included in the scene graph. Then there is a component that processes them: it creates their reflection maps + SH, builds the tetrahedral grid, and determines their intensities. I think it could be called WebGLLightProbes or something if we follow our current pattern.

LightProbes could have a .needsUpdate on them, and if that is set, the tetrahedral mapping will recalculate, and it will also recalculate its SH + reflection map.

I would suggest that it is fully handled within WebGLRenderer and its subclasses -- this way it would be as easy to use as traditional Three.JS lights. Maybe it can be enabled/disabled on WebGLRenderer globally so that it doesn't search all the time for light probes to see if it should do something or not. It is an advanced feature people can turn on.

I wonder if a SpecularLightProbe could also be used for semi-accurate refraction mapping through a rough surface (via the mipmap structure). I think it could. We should plan for that.

@bhouston
Contributor

@mrdoob, LightProbes are fundamentally different from lights: they are a measurement of the light passing through a particular spot, not an emitter of light. Thus I would not call them a "Light"; I would strongly suggest calling them a "LightProbe". This is the term that Unity 3D uses: https://docs.unity3d.com/Manual/LightProbes.html

There are generally just two types of light probes: Diffuse, represented by SH, and Specular, represented by a low-res cube map. One generally cannot interpolate specular cube maps well -- it leads to double/triple imaging -- so you often just pick the closest light probe for the specular reflection (I think, but maybe someone else should confirm), but you interpolate between the 4 closest light probes' SHes for the diffuse component.

@WestLangley
Collaborator Author

WestLangley commented Apr 12, 2019

From https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/

Having a lot of cubemaps require a lot of storage. Unlike irradiance cubemap which can be very low resolution (16x16x6 or 8x8x6) a specular cubemap is rather middle resolution (64x64x6 or 128x128x6). Higher resolution (256x256x6 or 512x512x6) may even be required for mirror objects.

@bhouston said:

I think all LightProbes should use SH for diffuse (because it is more efficient than low-res cube maps for diffuse)

On the JS side, representing LightProbes with SH is more efficient than with a cube map. However, in the shader, it is my understanding that a texture look-up is faster. That is why I suggested we may want to support cube map representations for light probes -- at least in the shader implementation. But for now, this PR is fine.
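
For reference, both representations encode the same low-frequency signal, so a 16x16x6 irradiance cube map could be baked from the SH by evaluating it once per texel direction. A minimal sketch of the JS-side evaluation, assuming the getIrradianceAt() method of the SphericalHarmonics3 class in this PR:

// Evaluate the irradiance arriving from a given unit direction
// (e.g. a cube map texel direction when baking).
var normal = new THREE.Vector3( 0, 1, 0 );
var irradiance = new THREE.Vector3();

lightProbe.sh.getIrradianceAt( normal, irradiance ); // RGB irradiance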

Replaces the need for envCubeMap that is currently used on materials.

If I am not mistaken, we can use the term ReflectionProbe for a specular cube map. And yes, reflection probes would replace the need for mesh.material.envMap.

@donmccurdy
Collaborator

donmccurdy commented Apr 12, 2019

I'd agree that LightProbe is a clearer name for how these objects collectively affect the scene.

I wonder if a SpecularLightProbe could also be used for semi-accurate refraction mapping through a rough surface (via the mipmap structure). I think it could. We should plan for that.

That hadn't occurred to me, but would be excellent if possible.

I would suggest that it is fully handled within WebGLRenderer and its subclasses -- this way it would be as easy to use as traditional Three.JS lights. ... It is an advanced feature people can turn on.

One complication is that light probes are not usually applied to static objects, as far as I know. Terrain will not receive meaningful lighting information from any single tetrahedron in the light probe volume. Selective lighting has been discussed before (#5180); alternatively, making light probe contributions optional at the material level would be sufficient. EDIT: I think I prefer the latter.

@bhouston
Contributor

One complication is that light probes are not usually applied to static objects, as far as I know. Terrain will not receive meaningful lighting information from any single tetrahedron in the light probe volume. Selective lighting has been discussed before (#5180); alternatively, making light probe contributions optional at the material level would be sufficient.

You are correct that light probes for GI are not generally used on static objects. Normally the static parts of the scene will have baked light maps on them, which capture the incoming diffuse global illumination arriving on those surfaces at a low resolution; this is incorporated into the object as an indirect diffuse contribution in our model.

Thus one should basically categorize objects into dynamic objects, which are lit via the LightProbes, and non-dynamic ones, which should have their GI baked into their light maps. I think it really is the objects themselves that are dynamic rather than their materials. Or at least it would be easier to manage that way.

At some point we should probably have a GI light baker as well (e.g. https://github.com/mem1b/lightbaking) that uses light maps that are auto-unwrapped like in other game engines (#14053).

@bhouston
Contributor

@WestLangley wrote:

However, in the shader, it is my understanding that a texture look-up is faster. That is why I suggested we may want to support cube map representations for light probes -- at least in the shader implementation. But for now, this PR is fine.

Could we put all the SHs for all light probes in the scene into a single data texture -- all 1000 or so -- and then just have indices and weights passed into each material shader that needs them?

But if we are going to do a lot of cube maps, we should also pack those into just a few large texture maps. Otherwise we will run into texture slot issues on low-end mobile devices.
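
A sketch of the packing idea; the layout here (one RGB texel per SH coefficient, nine texels per probe, one probe per row) is hypothetical and not part of this PR:

var probeCount = probes.length;
var data = new Float32Array( probeCount * 9 * 3 );

for ( var p = 0; p < probeCount; p ++ ) {

	var coefficients = probes[ p ].sh.coefficients; // nine Vector3s

	for ( var c = 0; c < 9; c ++ ) {

		coefficients[ c ].toArray( data, ( p * 9 + c ) * 3 );

	}

}

// A single texture holds every probe, so per-material texture slots are avoided;
// shaders would sample it by probe index and coefficient index.
var shTexture = new THREE.DataTexture( data, 9, probeCount, THREE.RGBFormat, THREE.FloatType );
shTexture.needsUpdate = true;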

@bhouston
Contributor

bhouston commented Apr 12, 2019

If I am not mistaken, we can use the term ReflectionProbe for a specular cube map. And yes, reflection probes would replace the need for mesh.material.envMap.

You are right.

In Unreal Engine you place reflection probes completely separately from light probes (which are automatic, I understand): https://docs.unrealengine.com/en-US/Engine/Rendering/LightingAndShadows/ReflectionEnvironment

Unity 3D's reflection probe: https://docs.unity3d.com/Manual/class-ReflectionProbe.html

So apparently we need both.

@WestLangley
Collaborator Author

I am suggesting baby steps.

In the long term, we can have LightProbe and ReflectionProbe, and remove mesh.material.envMap.

Currently, with this PR, we have LightProbe, which is treated like AmbientLight, and mesh.material.envMap. That means both are assumed to be centered on the mesh.

Instead, we could have mesh.material.lightProbe and mesh.material.envMap. That way, it is clear the probe moves with the mesh... well, all meshes sharing that material. (Not necessary, just a suggestion to think about.)

As a next step, we can add an example where mesh.material.lightProbe is updated by the application layer as the mesh moves using some tetrahedral space.

After that, we can add reflection probes. That requires parallax correction, since the probes are not centered on the mesh.

@bhouston
Contributor

bhouston commented Apr 12, 2019

I like the plan @WestLangley . Baby steps is the only way to get stuff done.

@bhouston
Contributor

Instead, we could have mesh.material.lightProbe and mesh.material.envMap. That way, it is clear the probe moves with the mesh... well, all meshes sharing that material. (Not necessary, just a suggestion to think about.)

Well, it is weird if all meshes that share a material must share the same light probe. It really should be that each mesh gets its own light probes on its materials, based on that mesh's position in the scene.

@WestLangley
Collaborator Author

Instead, we could have mesh.material.lightProbe and mesh.material.envMap. That way, it is clear the probe moves with the mesh... well, all meshes sharing that material.

@bhouston What do you think of this approach for now? We would later simultaneously remove envMap and lightProbe from material. Just a thought...

@mrdoob
Owner

mrdoob commented Apr 13, 2019

Thanks for the explanations guys!

I was a bit confused because currently LightProbe extends Light. Maybe it should extend Object3D instead and we should create a new folder src/probes?

As per the constructor suggestion, I was basically trying to fix the current LightProbe( undefined, API.lightProbeIntensity ) usage.

@WestLangley
Collaborator Author

WestLangley commented Apr 13, 2019

@mrdoob A light probe is a probe of ambient light. In our model, we rightly assume that a mesh located at a probe location will receive the same amount of ambient (indirect) light as the probe does.

AmbientLight is just a poor man's light probe -- one that has constant irradiance in every direction. HemisphereLight is also a simple model for a probe.

We use the nomenclature "light" for AmbientLight and HemisphereLight even though they are not sources of light like a PointLight is. They measure irradiance, instead.

We could rename them to AmbientProbe and HemisphereProbe if you felt strongly about it.

Or, we can keep the AmbientLight nomenclature, and use that term instead of LightProbe -- we would just extend the capabilities of AmbientLight to encompass what LightProbe can do.

As far as the inheritance goes, I am open to any suggestions.

@WestLangley
Collaborator Author

@mrdoob wrote

As per the constructor suggestion, I was basically trying to fix the current LightProbe( undefined, API.lightProbeIntensity ) usage.

I understand.

Currently, however, this pattern does work:

var hemiLightProbeSH = new THREE.SphericalHarmonics3().set( [
      new THREE.Vector3( 0.5908, 0.5908, 0.5908 ),
      new THREE.Vector3( 0.8506, 0.8506, 0.8506 ),
      new THREE.Vector3( 0, 0, 0 ),
      new THREE.Vector3( 0, 0, 0 ),
      new THREE.Vector3( 0, 0, 0 ),
      new THREE.Vector3( 0, 0, 0 ),
      new THREE.Vector3( -0.3642, -0.3642, -0.3642 ),
      new THREE.Vector3( 0, 0, 0 ),
      new THREE.Vector3( -0.6308, -0.6308, -0.6308 )
] );

var lightProbe = new THREE.LightProbe( hemiLightProbeSH, intensity );

I am open to any suggestions, of course.

@WestLangley
Collaborator Author

Alternatively, we could do something like this:

var lightProbe = new THREE.LightProbe();

lightProbe.set( hemiLightProbeSH, intensity );

lightProbe.setFromCubeTexture( cubeTexture, intensity );

LightProbe extends Light. Maybe it should extend Object3D

Currently, the lightProbe.color is ignored. But it could be used to tint the probe.

If color is ignored, LightProbe could extend Object3D instead of Light.

@mrdoob
Owner

mrdoob commented Apr 14, 2019

What do you think about copying AmbientLight's constructor for now?

function LightProbe( color, intensity ) {

	Light.call( this, color, intensity );

	this.sh = new SphericalHarmonics3();

	this.type = 'LightProbe';

}

@WestLangley
Collaborator Author

WestLangley commented Apr 14, 2019

What do you think about copying AmbientLight's constructor for now?

Done!

We will have to decide if we want color to tint the probe when the probe is set from a cube texture. Currently color is ignored in that case.

@mrdoob
Owner

mrdoob commented Apr 14, 2019

Sweet! Thanks!

@bhouston
Contributor

bhouston commented Apr 15, 2019

We will have to decide if we want color to tint the probe when the probe is set from a cube texture. Currently color is ignored in that case.

If we want that, it should be a function on the SphericalHarmonics class: sh.multiplyByColor( red );

I really like @mrdoob's recommendation that LightProbe be derived from Object3D. AmbientLightProbe and HemisphereLightProbe could both be implemented as a DiffuseLightProbe that takes an SH. Then we actually get rid of code and special cases in the renderer. And then we also get SpecularLightProbe in the future.

I would have a separate light probe workflow in the WebGLRenderer that is separate from lights; it would be all SH-based for diffuse interactions (DiffuseLightProbe) and cubemap-based for specular interactions (SpecularLightProbe).

The class hierarchy would look something like this:

  • LightProbe - this has no color information, just a placeholder with position.
    • DiffuseLightProbe (this is SH-based)
      • AmbientLightProbe (generalizes/replaces AmbientLight, a single color light probe)
      • HemisphereLightProbe (generalizes/replaces HemisphereLight, a 2-color light probe)
    • SpecularLightProbe (generalizes/replaces envCubemap)
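
A skeleton of that hierarchy in the codebase's pre-ES6 idiom (hypothetical; this is not what the PR implements):

function LightProbe() {

	Object3D.call( this );

	this.type = 'LightProbe'; // a placeholder with position, no color information

}

LightProbe.prototype = Object.create( Object3D.prototype );

function DiffuseLightProbe( sh ) {

	LightProbe.call( this );

	this.sh = ( sh !== undefined ) ? sh : new SphericalHarmonics3();
	this.type = 'DiffuseLightProbe';

}

DiffuseLightProbe.prototype = Object.create( LightProbe.prototype );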

@Usnul and @WestLangley, I would just stick with SH inside of the diffuse light probe. It is simple and effective. I think trying to support both cube maps and SH will complexify the code. If we ever find that texture lookups are much faster, we can just convert the SHs into data textures, with all SHs for a level stored in a single texture.

(Completely off topic: probably at some point we should redo SphericalHarmonics to have a Float32Array inside, like Matrix4. But that will only become important once we have a lot of them.)
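
A minimal sketch of that flat-array idea (hypothetical; the SphericalHarmonics3 in this PR stores an array of nine Vector3s instead):

function FlatSphericalHarmonics3() {

	// 9 coefficients * 3 channels (RGB), stored contiguously like Matrix4.elements
	this.elements = new Float32Array( 27 );

}

FlatSphericalHarmonics3.prototype.scale = function ( s ) {

	for ( var i = 0; i < 27; i ++ ) this.elements[ i ] *= s;

	return this;

};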

@WestLangley
Collaborator Author

Thanks everyone for your feedback. I agree it is best to stick with SH-only for representing irradiance.

There are many issues I am working on to get the modeling correct regarding light probes.

I am also listening to the opinions of others to get consensus regarding the three.js light probe workflow, and in fact, a consensus as to what a three.js light probe represents.

Please, let's avoid refactoring the three.js class structures at this point.

If someone is interested in creating a CPU-based tessellation example, that can be done in parallel, and would be of interest, I expect.

@donmccurdy
Collaborator

If someone is interested in creating a CPU-based tessellation example, that can be done in parallel, and would be of interest, I expect.

I'd be willing to try implementing CPU-based Delaunay tetrahedralization and probe interpolation. Let me see if I can make some progress this weekend.

@WestLangley
Collaborator Author

@mrdoob @bhouston Regarding having LightProbe extend Object3D...

The only Object3D property that applies is .position, I think. On the other hand, Light.intensity and Light.color do apply.

So, why extend Object3D? Does LightProbe have to extend Object3D?

@bhouston
Contributor

bhouston commented Apr 17, 2019 via email

@donmccurdy
Collaborator

donmccurdy commented Apr 17, 2019

I'm not sure what we'd lose by removing .intensity and .color from the LightProbe class, actually. lightProbe.sh.scale( 0.5 ) or lightProbe.sh.multiplyColor( c ) and material.lightProbeIntensity would provide the same functionality, with fewer complications for interpolation.

@WestLangley
Collaborator Author

Why does color apply?

It doesn't have to. It just tints. I agree, it can be removed.

Why does intensity apply?

Light probe coefficients can be computed from a cubeMap. A cubeMap is unit-less; it has an associated intensity, radiance.

The probe needs an intensity parameter, irradiance. The probe coefficients are unit-less.
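
In code terms, a sketch of that scaling relationship (hypothetical; the PR's shader may apply intensity differently):

// unit-less SH coefficients baked from a cubeMap, scaled by the probe's
// intensity to yield irradiance
var effectiveSH = lightProbe.sh.clone().scale( lightProbe.intensity );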

@WestLangley
Collaborator Author

WestLangley commented Apr 17, 2019

Can LightProbe not extend anything? Does it have to be added to the scene graph for any reason? Is a JS array of probes sufficient?

Certainly, the shader needs SH coefficients. So we need material.sh.coefficients, but that is just an array of interpolated values.

@bhouston
Contributor

bhouston commented Apr 17, 2019 via email

@mrdoob
Owner

mrdoob commented Apr 17, 2019

I agree that color and intensity don't seem to be needed. We can always add them later if needed.

Extending Object3D feels right to me, and considering that we want to eventually interpolate between different light probes, adding them to the scene seems right too.

@Usnul
Contributor

Usnul commented Apr 17, 2019

I can't help but feel that Object3D is slowly turning into a liability. Provided we have a performant implementation of light probes, many users will want to use a lot of them. A light probe by itself, if implemented as an SH with 9 floats, needs the following space:

coefficients: 9 * 4 bytes
position: 3 * (4 bytes for Float32 or 8 for Float64)

That's 48 bytes per probe. Sure, this is a gross simplification, since there's a lot of overhead associated with a JS object under the hood. But last I checked, Object3D has a lot more:

Object.defineProperty( this, 'id', { value: object3DId ++ } );

this.uuid = _Math.generateUUID();

this.name = '';
this.type = 'Object3D';

this.parent = null;
this.children = [];

this.up = Object3D.DefaultUp.clone();

var position = new Vector3();
var rotation = new Euler();
var quaternion = new Quaternion();
var scale = new Vector3( 1, 1, 1 );

function onRotationChange() {

	quaternion.setFromEuler( rotation, false );

}

function onQuaternionChange() {

	rotation.setFromQuaternion( quaternion, undefined, false );

}

rotation.onChange( onRotationChange );
quaternion.onChange( onQuaternionChange );

Object.defineProperties( this, {
	position: {
		configurable: true,
		enumerable: true,
		value: position
	},
	rotation: {
		configurable: true,
		enumerable: true,
		value: rotation
	},
	quaternion: {
		configurable: true,
		enumerable: true,
		value: quaternion
	},
	scale: {
		configurable: true,
		enumerable: true,
		value: scale
	},
	modelViewMatrix: {
		value: new Matrix4()
	},
	normalMatrix: {
		value: new Matrix3()
	}
} );

this.matrix = new Matrix4();
this.matrixWorld = new Matrix4();

this.matrixAutoUpdate = Object3D.DefaultMatrixAutoUpdate;
this.matrixWorldNeedsUpdate = false;

this.layers = new Layers();
this.visible = true;

this.castShadow = false;
this.receiveShadow = false;

this.frustumCulled = true;
this.renderOrder = 0;

this.userData = {};

By itself it has 24 properties and a bunch of owned objects.

Let's have a look at whether they are applicable to a LightProbe:

property                 type          applicability
id                       number        ✔ ok
uuid                     string        ➖ ok
name                     string        ➖ "my favorite probe"? - ok
type                     string        ?
parent                   Object3D      ➖ ok, if we want to support grouping
children                 Object3D[]    ❌ nope
up                       Vector3       ❌ nope
position                 Vector3       ✔ yep
rotation                 Euler         ❌ hard nope
quaternion               Quaternion    ❌ same as rotation
scale                    Vector3       ❌ nope
modelViewMatrix          Matrix4       ➖ ok?
normalMatrix             Matrix3       ➖ ?
matrix                   Matrix4       ➖ ok?
matrixWorld              Matrix4       ➖ ok?
matrixAutoUpdate         boolean       ➖ ok?
matrixWorldNeedsUpdate   boolean       ➖ ok?
layers                   Layers        ➖ ok
visible                  boolean       ❌ hard nope
castShadow               boolean       ❌ nope
receiveShadow            boolean       ❌ nope
frustumCulled            boolean       ✔ ok
renderOrder              number        ❌ nope
userData                 object        ➖ ok

@bhouston
Contributor

bhouston commented Apr 17, 2019 via email

@Usnul
Contributor

Usnul commented Apr 17, 2019

LightProbeSet and BufferedLightProbeSet anyone? :)

@bhouston
Contributor

bhouston commented Apr 17, 2019 via email

@donmccurdy
Collaborator

donmccurdy commented Apr 17, 2019

Some other points of comparison for terminology:

Unity has Light Probes and Light Probe Groups. From the documentation:

When editing a Light Probe Group, you can manipulate individual Light Probes in a similar way to GameObjects. However, Light Probes are not GameObjects; they are a set of points in the Light Probe Group component.

Blender 2.8 has three types of light probe: Reflection Cubemaps, Reflection Planes, and Irradiance Volumes. Irradiance volumes contain a configurable number of probes, arranged automatically in a 2D or 3D grid.

I would also consider LightProbeGroup, LightProbeVolume, and IrradianceVolume on the list above. I don't have a strong preference between any of these, except that I don't think the word "Buffer" is conceptually helpful in this API.

DiffuseProbes can be something that is set on the scene itself, rather than an Object3D structure.

This brings us back to the question of where interpolation should occur, per #16270 (comment). If an object exists outside the renderer to represent a collection of light probes, I don't think interpolation on that collection should be happening inside the renderer. Alternatively, the collection could be an Object3D, rather than the individual probes, and interpolation will happen in the volume's local coordinate space.

var probe1 = new THREE.LightProbe( sh1, pos1 ); // not an Object3D
var probe2 = new THREE.LightProbe( sh2, pos2 );
// ...

var probeVolume = new THREE.LightProbeVolume( [ probe1, probe2, ... ] ); // Object3D

scene.add( probeVolume );

mesh.material.lightProbes = true;

function animate () {
  probeVolume.update( mesh ); // updates mesh.diffuseSH or mesh.material.diffuseSH?
  renderer.render( scene, camera );
}

@bhouston
Contributor

I do not have a strong opinion on where the parent class exists -- add to Scene or as a scene parameter. But it does not make much sense to move a LightProbeVolume around, thus I question why it should be derived from Object3D.

I like the name Volume better than Set. I also dislike the Buffer prefix. The diffuse light probe set will be compiled separately from the specular one. Thus I do think IrradianceVolume or DiffuseProbeVolume makes sense. IrradianceVolume is actually probably the most technically correct; it is just a mouthful and not obvious to newcomers, who will find it hard to relate DiffuseLightProbe with IrradianceVolume based on name alone.

@Usnul
Contributor

Usnul commented Apr 17, 2019

I was alluding to Geometry and BufferGeometry, not a big fan of that word in this context either. I prefer Set or Group over Volume, as the volume is implicitly defined by the probes themselves.

I think having a "compiler" of sorts is necessary. Having the What'cha'ma'call'it light probe collection perform the role of said compiler makes sense, until there is a rationale for factoring it out into a separate LightProbe(Builder/Compiler/Manager/Engine/System/Thing).

@bhouston
Contributor

I was alluding to Geometry and BufferGeometry, not a big fan of that word in this context either. I prefer Set or Group over Volume, as the volume is implicitly defined by the probes themselves.

The tetrahedral structure is technically a piece-wise linear discretization of space. Thus it is a volume at that point, not just implicitly. :)

@donmccurdy
Collaborator

I think having a "compiler" of sorts is necessary. Having the ... probe collection perform the role of said compiler makes sense, until there is a rationale for factoring it out.

I suspect they can be one and the same – a tetrahedral structure could even be updated incrementally, with some additional work – but we can see how this progresses.

But it does not make much sense to move a LightProbeVolume around, thus I question why it should be derived from Object3D.

I can't think of a strong reason to move the volume either, but perhaps a user might want multiple volumes in the same scene, e.g. for dividing the scene into several zones? I'm speculating, though, and this would require the user to determine which volume updates a particular mesh...

@donmccurdy
Collaborator

I'd be willing to try implementing CPU-based Delaunay tetrahedralization and probe interpolation. Let me see if I can make some progress this weekend.

Follow-up in #16228.
