
GGX importance sampling #40

Closed
DU-jdto opened this issue Jul 24, 2019 · 17 comments


DU-jdto commented Jul 24, 2019

Not a bug, but a suggestion. In my local fork I swapped the indirect specular importance sampling over to the VNDF sampling method described in this paper: http://jcgt.org/published/0007/04/01/paper.pdf . This resulted in an immediately apparent improvement in the appearance of indirect specular at low sample counts. The code I used is this:

brdf.glsl:

float G1_Smith(float roughness, float NdotL)
{
    // Smith GGX masking term G1 (exact form, not the Schlick approximation), with alpha = roughness^2
    float alpha = square(max(roughness, 0.02));
    return 2.0 * NdotL / (NdotL + sqrt(square(alpha) + (1.0 - square(alpha)) * square(NdotL)));
}

vec3 ImportanceSampleGGXVNDF(vec2 u, float roughness, vec3 Ve)
{
    float alpha = square(roughness);

    // Transform the view direction to the hemisphere configuration (Heitz 2018)
    vec3 Vh = normalize(vec3(alpha * Ve.x, alpha * Ve.y, Ve.z));

    // Orthonormal basis around Vh (with special handling for Vh == +Z)
    float lensq = square(Vh.x) + square(Vh.y);
    vec3 T1 = lensq > 0.0 ? vec3(-Vh.y, Vh.x, 0.0) * inversesqrt(lensq) : vec3(1.0, 0.0, 0.0);
    vec3 T2 = cross(Vh, T1);

    // Sample a point on the disk, then warp it onto the visible half
    float r = sqrt(u.x);
    float phi = 2.0 * M_PI * u.y;
    float t1 = r * cos(phi);
    float t2 = r * sin(phi);
    float s = 0.5 * (1.0 + Vh.z);
    t2 = (1.0 - s) * sqrt(1.0 - square(t1)) + s * t2;

    // Reproject onto the hemisphere to get the half vector in the hemisphere configuration
    vec3 Nh = t1 * T1 + t2 * T2 + sqrt(max(0.0, 1.0 - square(t1) - square(t2))) * Vh;

    // Transform back to the ellipsoid configuration. The y/z swap matches the tangent-space
    // convention used at the call site; the result is normalized by the caller.
    return vec3(alpha * Nh.x, max(0.0, Nh.z), alpha * Nh.y);
}

indirect_lighting.rgen:

mat3 basis = construct_ONB_frisvad(normal);

vec3 N = normal;
vec3 V = direction;
vec3 H = normalize(basis * ImportanceSampleGGXVNDF(rng3, primary_roughness, -vec3(dot(V, basis[0]), dot(V, basis[2]), dot(V, basis[1]))));
vec3 L = reflect(V, H);

float NoL = max(0.0, dot(N, L));
float NoV = max(0.0, -dot(N, V));

if (NoL > 0.0 && NoV > 0.0)
{
    // Incident light = SampleColor * NoL
    // Microfacet specular = D*G2*F / (4*NoL*NoV)
    // VNDF sampling pdf = G1(V) * D / (4*NoV)
    // => weight = (specular * NoL) / pdf = F * G2 / G1(V) = F * G1(L) for the separable Smith G
    float G = G1_Smith(primary_roughness, NoL);

    bounce_throughput *= G;
    bounce_throughput *= 1.0 / specular_pdf;
    is_specular_ray = true;
    bounce_direction = L;
}

(The added NoV > 0 condition is required; for a while I was getting subtly different results with this importance sampling in reference mode, until I eventually figured out that this was because I had omitted that condition.)

EDIT: Oh, I stopped using the Schlick approximation for the G term a while back, hence the form of my G1_Smith function. Obviously the formula used should be consistent with the one used in the G_Smith_over_NdotV function.
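
For reference, a non-Schlick G_Smith_over_NdotV consistent with the G1_Smith above would look roughly like this. This is just a sketch using the separable Smith form; the function name comes from the existing code, but the signature is assumed and this is not the stock Q2RTX implementation:

// Sketch only: separable-Smith G2 / NdotV, consistent with the G1_Smith above
float G_Smith_over_NdotV(float roughness, float NdotV, float NdotL)
{
    float alpha = square(max(roughness, 0.02));
    // G1 for the light direction
    float g1 = 2.0 * NdotL / (NdotL + sqrt(square(alpha) + (1.0 - square(alpha)) * square(NdotL)));
    // G1 for the view direction, already divided by NdotV
    float g2 = 2.0 / (NdotV + sqrt(square(alpha) + (1.0 - square(alpha)) * square(NdotV)));
    return g1 * g2;
}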


DU-jdto commented Jul 24, 2019

I disabled both direct and indirect specular anywhere NdotV is <= 0, and immediately discovered that certain surfaces have weird normals:

[Screenshots: quake098, quake099]

It appears that the black parts of both of these surfaces have normals that all point in the same arbitrary direction. Given that the albedo is completely black in both cases, my best guess is that once upon a time someone decided that the normals didn't matter because the surface would be black anyway, but the metallic value would need to be 1.0 for that to be the case for specular.

EDIT: For the moment I've put a band-aid on the issue by forcing metallic to 1.0 when the albedo is vec3(0) in get_material.
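
Roughly something like this inside get_material (the variable names here are assumptions rather than my exact code):

// Band-aid: treat fully black albedo as fully metallic, so the (black) albedo
// also zeroes out the specular and the broken normals on these surfaces stay invisible
if (albedo == vec3(0))
    metallic = 1.0;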

EDIT2: I am quickly discovering how annoying backfacing shading normals are. I don't suppose there are any plans to implement displacement mapping? :P

apanteleev (Collaborator) commented:

This looks like a good suggestion, thanks! I'll try to look into it in more detail soon.

There are no plans for displacement mapping, for sure. We have discussed adding parallax mapping, which seems possible, but it would need to be performed for every ray that interacts with the primary surface, including shadow and bounce rays, which would be expensive.

The textures are often really bad, I agree. Some just have to be fixed for the next release because they look obviously broken after the recent BRDF improvements, like the horizontal metal grates in base1.


DU-jdto commented Jul 26, 2019

Since submitting this issue, I have removed the NoV > 0 condition; instead, whenever I assign a shading normal, I now ensure that N.V > 0 and replace N with the geometric normal if not. This seems to work reasonably well for both diffuse and specular, though there are still other things I might try.

EDIT: I ended up going with:

vec3 front_normal(vec3 N, vec3 V)
{
    // Assuming V is the incoming ray direction (camera toward surface), as in the
    // rgen snippet above, a front-facing shading normal has dot(N, V) < 0.
    float NoV = dot(N, V);
    if (NoV <= -0.0002)
        return N;                         // already facing the viewer; leave it alone
    vec3 T = normalize(N - V * NoV);      // project N onto the plane perpendicular to V
    return normalize(T - V * 0.0002);     // then nudge it slightly toward the viewer
}
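
It's applied wherever the shading normal gets assigned; a hypothetical call site, using the variable names from the rgen snippet above, would be:

normal = front_normal(normal, direction);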

SkacikPL commented:

Not strictly normal-map related, but I would guess that specular maps would require similar treatment, as certain parts still get a specular highlight even when they're supposed to imitate "deep holes". I'm not sure how widespread this is, or whether a simple texture tweak or some smart on-the-fly adjustment would be more viable.

[Two screenshots from Aug 10, 2019]


DU-jdto commented Aug 10, 2019

@SkacikPL Yeah, the change I made to force metallic to 1 when albedo is 0 was to address just that issue. Unfortunately, it also produced some false positives, so I've reverted the change in my local fork for now.

apanteleev (Collaborator) commented:

Many materials are being improved to work well with the recent BRDF and filtering improvements. A couple of WIP examples (non-reference renders):

[Three WIP screenshots]


DU-jdto commented Aug 19, 2019

Those look fantastic! 😄


SkacikPL commented Aug 19, 2019

Yeah that looks quite good.

On a completely unrelated note (perhaps?), I've noticed that the player hand looks completely black when holding the hyperblaster. I'm not sure what the problem is here, but I'd assume it's some sort of asset issue, so if the assets are currently getting another pass, it might be worth taking a look at it.

It does seem to receive only a tiny fraction of GI at certain points, but that's about it.
[Screenshot of the player hand with the hyperblaster]

apanteleev (Collaborator) commented:

I remember seeing something like this before, but the current version looks fine, and there have been no hyperblaster asset changes since version 1.0.

[Screenshot of the current version]


DU-jdto commented Aug 21, 2019

Another suggestion: for a whole load of random roughness values and random unit vectors V and L on the hemisphere around N, I've compared the result of calculating the GGX G term

a) using the Schlick approximation with k = (roughness+1)^2 / 8 (as in Q2RTX currently),
b) using the Schlick approximation with k = roughness^2 / 2 (as suggested at http://graphicrants.blogspot.com/2013/08/specular-brdf-reference.html),
c) using the separable form of the GGX Smith masking function, and
d) using the height-correlated form of the GGX Smith masking function (as defined in http://jcgt.org/published/0003/02/03/paper.pdf; this is the most accurate of the four).

In every comparison, (d) >= (c) >= (b) >= (a), with the current version (a) being more inaccurate the lower the roughness; even in the absolute best case, roughness = 1, it is merely equivalent to (b) and (c).

As such, it seems the G term could be made more accurate at no cost to efficiency or complexity by simply replacing the 'k' term in the G_Smith_over_NdotV function with

float k = square(roughness) / 2;

EDIT: Also, is there any particular reason to force the Fresnel term to 0 when roughness is 1? As far as I can see, none of the math should break; there's nothing physically inconsistent about a maximally rough surface having a specular component, and forcing specular to 0 for such surfaces currently makes highly rough surfaces like many floor tiles look a bit odd. I made the change to allow roughness = 1 surfaces to have a specular component some time ago in my local fork (albeit with a different Fresnel calculation, but I don't see any reason it shouldn't work fine with the current one), and overall I think it has made the image look more natural.

EDIT2: A screenshot comparison of the four forms of the GSF (external links because the files are too large for GitHub):

Schlick k = (roughness + 1)^2 / 8 (current)

float G2_Smith(float roughness, float NdotV, float NdotL)
{
	float k = square(roughness + 1) / 8;
	float g1 = NdotL / (NdotL * (1 - k) + k);
	float g2 = NdotV / (NdotV * (1 - k) + k);
	return g1 * g2;
}

https://cdn.discordapp.com/attachments/416498070008889354/614392494066761729/quake136.png

Schlick k = roughness^2 / 2

float G2_Smith(float roughness, float NdotV, float NdotL)
{
	float k = square(roughness) / 2;
	float g1 = NdotL / (NdotL * (1 - k) + k);
	float g2 = NdotV / (NdotV * (1 - k) + k);
	return g1 * g2;
}

https://cdn.discordapp.com/attachments/416498070008889354/614392532620541973/quake137.png

GGX Separable

float G2_Smith(float roughness, float NdotV, float NdotL)
{
	float alpha = square(roughness);
	float g1 = 2.0 * NdotL / (NdotL + sqrt(square(alpha) + (1.0 - square(alpha)) * square(NdotL)));
	float g2 = 2.0 * NdotV / (NdotV + sqrt(square(alpha) + (1.0 - square(alpha)) * square(NdotV)));
	return g1 * g2;
}

https://cdn.discordapp.com/attachments/416498070008889354/614392585414377501/quake138.png

GGX Height-Correlated

float G2_Smith(float roughness, float NdotV, float NdotL)
{
	float alpha = square(roughness);
	float g1 = NdotV * sqrt(square(alpha) + (1.0 - square(alpha)) * square(NdotL));
	float g2 = NdotL * sqrt(square(alpha) + (1.0 - square(alpha)) * square(NdotV));
	return 2.0 * NdotL * NdotV / (g1 + g2);
}

https://cdn.discordapp.com/attachments/416498070008889354/614392633791348740/quake139.png

I use the GGX height-correlated version in my fork, since it can have a noticeable effect on the appearance of rough surfaces and the performance hit is negligible (though I'm sure it'd be larger on Turing; I use a Pascal 1080 Ti, which naturally tends to be heavily bottlenecked by the ray tracing itself).

EDIT3: Also, if the VNDF sampling isn't implemented, I would suggest forcing pt_ndf_trim to 1 in reference mode (just as the other noise-reducing hacks are disabled in reference mode). If the VNDF sampling is implemented, there shouldn't be a reason to keep pt_ndf_trim anyway.

apanteleev (Collaborator) commented:

Thanks a lot @DU-jdto!

I've integrated your VNDF sampling suggestions and the Height-Correlated G2 function. VNDF helps significantly with reference mode convergence on rougher materials, but doesn't affect the real-time mode much because of the fake specular.

Regarding the roughness=1 case, you're right, nothing explodes when I keep specular reflections on these surfaces, but those materials lose contrast and look washed out.


DU-jdto commented Aug 27, 2019

VNDF helps significantly with reference mode convergence on rougher materials, but doesn't affect the real-time mode much because of the fake specular.

Makes sense. I wonder if you could get away with a slightly higher default value for pt_fake_roughness_threshold now? (Probably not)


DU-jdto commented Aug 28, 2019

Tried it. In my limited testing, increasing pt_fake_roughness_threshold doesn't really seem feasible (the denoiser is really unforgiving when it comes to that cvar...), but I didn't notice any issues setting pt_specular_anti_flicker to 0, and it did produce a small benefit to a few (but not many) surfaces.


SkacikPL commented Aug 28, 2019

This isn't a field I'm particularly competent in, and for the most part the changes are great at bringing out the specular channel. However, they also highlight the need for a better temporal filter.

Right now anything with high specularity really stands out, in a good way; however, once you start moving, those objects quickly lose the majority of that detail, which reappears once you stabilize the camera for a second or two.

This is just armchair-expert feedback, but I hope some specific improvements are on the agenda.


DU-jdto commented Sep 28, 2019

Regarding the roughness=1 case, you're right, nothing explodes when I keep specular reflections on these surfaces, but those materials lose contrast and look washed out.

I finally got around to trying this with the official codebase rather than my fork, and yeah, I found the same thing. Interestingly, this is much less the case in my fork, so I guess the Fresnel term differences are more relevant than I had assumed.

apanteleev (Collaborator) commented:

Interesting. What are you doing for the Fresnel term?


DU-jdto commented Sep 29, 2019

For specular, I'm using the Schlick approximation with VdotH (where H is the halfway vector / microsurface normal). For diffuse, I'm using the Schlick approximation with NdotL. In neither case am I doing any roughness adjustment (in the specular case, at least, roughness is implicitly accounted for by the use of the microsurface normal in the calculation). I'm happy with the way the specular term is handled, but not the diffuse one. Ideally, for diffuse I'd want to calculate an average Fresnel over all possible microfacet normals weighted by the NDF, since, unlike in the specular case, with a diffuse interaction the outgoing light direction gives no information about the orientation of the microfacets encountered by the incoming light ray. However, deriving a closed-form expression for this seems likely to be a bit beyond me. Failing that, I may reintroduce roughness adjustment for the diffuse Fresnel term (but not the specular one).
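
In shader terms it's roughly the following (fresnel_schlick, F0, VdotH, and NdotL are placeholder names here, not the exact code from my fork):

vec3 fresnel_schlick(vec3 F0, float cos_theta)
{
    return F0 + (vec3(1.0) - F0) * pow(1.0 - cos_theta, 5.0);
}

// Specular: evaluated with the microsurface normal H, so roughness is
// implicitly accounted for via the sampled half vector
vec3 F_spec = fresnel_schlick(F0, VdotH);

// Diffuse: evaluated with NdotL, currently with no roughness adjustment
vec3 F_diff = fresnel_schlick(F0, NdotL);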

For a time I did move away from the Schlick approximation in favor of the actual Fresnel equations, but I went back to Schlick after I noticed that it actually more closely follows real-world reflectivity curves for metals when metals are approximated the way they are in Q2RTX (i.e. basically treated as highly reflective dielectrics rather than true conductors with complex refractive indices).

Obviously my approach complicates things quite a bit and doesn't integrate well with things like the fake specular, which are among the reasons why I've never suggested changing the Fresnel term here. When using just a single Fresnel term, the roughness-adjusted calculation used by Q2RTX seems to work very well (much better than Schlick with NdotV and no roughness adjustment). Do you happen to know where that expression comes from (e.g. is there some paper that explains it)?
