GGX importance sampling #40
I disabled both direct and indirect specular anywhere NdotV is <= 0 and immediately discovered that certain surfaces have weird normals: it appears that the black parts of both of these surfaces have normals that all point in the same arbitrary direction. Given that the albedo is completely black in both cases, my best guess is that once upon a time someone decided the normals didn't matter because the surface would be black anyway, but the metallic value would need to be 1.0 for that to hold for specular as well.

EDIT: For the moment I've put a bandaid on the issue by forcing metallic to 1.0 when the albedo is vec3(0) in get_material.

EDIT2: I am quickly discovering how annoying backfacing shading normals are. I don't suppose there are any plans to implement displacement mapping? :P
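A minimal sketch of that bandaid, assuming a small helper applied where the material properties are fetched (the function name is hypothetical, not the actual get_material code):

```glsl
// Hypothetical sketch of the workaround described above: treat pure-black
// albedo as fully metallic so the broken normals cannot produce a dielectric
// specular response. Not the actual get_material code.
float apply_black_albedo_bandaid(vec3 albedo, float metallic)
{
    if (albedo == vec3(0.0))   // GLSL's == on vectors yields a single bool
        return 1.0;
    return metallic;
}
```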
This looks like a good suggestion, thanks! I'll try to look into it in more detail soon. There are definitely no plans for displacement mapping. We have discussed adding parallax mapping, which seems possible, but it would need to be performed for every ray that interacts with the primary surface, including shadow and bounce rays, which would be expensive. The textures are often really bad, I agree. Some just have to be fixed for the next release because they look obviously broken after the recent BRDF improvements, like the horizontal metal grates in base1.
Since submitting this issue I have removed the NoV > 0 condition; instead, whenever I assign a shading normal, I now ensure that N.V is > 0 and replace N with the geometric normal if not. This seems to work reasonably well for both diffuse and specular, though there are still other things I might try.

EDIT: I ended up going with:
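The snippet referred to above isn't preserved in this copy of the thread. Purely as an illustration of the described fallback (hypothetical function name, not the fork's actual code), a minimal GLSL sketch would be:

```glsl
// Illustrative sketch only (not the fork's actual code): if the shading
// normal faces away from the viewer, fall back to the geometric normal so
// that dot(N, V) stays positive for both diffuse and specular shading.
vec3 fix_shading_normal(vec3 shading_normal, vec3 geo_normal, vec3 V)
{
    if (dot(shading_normal, V) <= 0.0)
        return geo_normal;
    return shading_normal;
}
```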
Not strictly normal map related, but I would guess that specular maps would require similar treatment, as certain parts still get specular highlights even when they're supposed to imitate "deep holes". Not sure how widespread this is, or whether a simple texture tweak or some smart on-the-fly adjustment would be more viable.
@SkacikPL Yeah, the change I made to force metallic to 1 when albedo is 0 was to address just that issue. Unfortunately, it also produced some false positives, so I've reverted the change in my local fork for now.
Those look fantastic! 😄
Another suggestion: for a whole load of random roughness values and random unit vectors V and L on the hemisphere around N, I've compared the result of calculating the GGX G term

a) using the Schlick approximation with k = (roughness + 1)^2 / 8 (as in Q2RTX currently),
b) using the Schlick approximation with k = roughness^2 / 2 (as suggested at http://graphicrants.blogspot.com/2013/08/specular-brdf-reference.html),
c) using the separable form of the GGX Smith masking function, and
d) using the height-correlated form of the GGX Smith masking function (as defined in http://jcgt.org/published/0003/02/03/paper.pdf; this one is the most accurate).

In every comparison, (d) >= (c) >= (b) >= (a), with the current version (a) becoming more inaccurate the lower the roughness and, in the absolute best case where roughness = 1, merely being equivalent to (b) and (c). (A GLSL sketch of the four forms is included after this comment.)

EDIT: Also, is there any particular reason to force the Fresnel term to 0 when roughness is 1? As far as I can see none of the math should break, there's nothing physically inconsistent about a maximally rough surface having a specular component, and forcing specular to 0 for such surfaces currently makes highly rough surfaces like many floor tiles look a bit odd. I made the change to allow roughness=1 surfaces to have a specular component some time ago in my local fork (albeit with a different Fresnel calculation, but I don't see any reason it shouldn't work fine with the current one), and overall I think it has made the image look more natural.

EDIT2: A screenshot comparison of the four forms of the GSF (external links because the files are too large for GitHub):

Schlick k = (roughness + 1)^2 / 8 (current): https://cdn.discordapp.com/attachments/416498070008889354/614392494066761729/quake136.png
Schlick k = roughness^2 / 2: https://cdn.discordapp.com/attachments/416498070008889354/614392532620541973/quake137.png
GGX Separable: https://cdn.discordapp.com/attachments/416498070008889354/614392585414377501/quake138.png
GGX Height-Correlated: https://cdn.discordapp.com/attachments/416498070008889354/614392633791348740/quake139.png

I use the GGX Height-Correlated version in my fork, since it can have a noticeable effect on the appearance of rough surfaces and the performance hit is negligible (though I'm sure it'd be larger on Turing; I use a Pascal 1080 Ti, which naturally tends to be heavily bottlenecked by the ray tracing itself).

EDIT3: Also, if the VNDF sampling isn't implemented, I would suggest forcing pt_ndf_trim to 1 in reference mode (just as the other noise-reducing hacks are disabled in reference mode). If the VNDF sampling is implemented, there shouldn't be a reason to keep pt_ndf_trim anyway.
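As referenced above, a sketch of the four G-term variants being compared; this is not code from Q2RTX or the fork, and the Smith forms assume the common alpha = roughness * roughness mapping:

```glsl
// Sketch of the four G-term variants. `k` is the Schlick remapping constant:
// (roughness + 1)^2 / 8 for variant (a), roughness^2 / 2 for variant (b).
// `alpha` is the GGX width parameter, assumed here to be roughness * roughness.

// (a), (b): Schlick approximation of the Smith term.
float G1_Schlick(float NoX, float k)
{
    return NoX / (NoX * (1.0 - k) + k);
}

float G_Schlick(float NoV, float NoL, float k)
{
    return G1_Schlick(NoV, k) * G1_Schlick(NoL, k);
}

// (c): separable Smith GGX, the product of two independent masking terms.
float G1_SmithGGX(float NoX, float alpha)
{
    float a2 = alpha * alpha;
    return 2.0 * NoX / (NoX + sqrt(a2 + (1.0 - a2) * NoX * NoX));
}

float G_SmithGGX_Separable(float NoV, float NoL, float alpha)
{
    return G1_SmithGGX(NoV, alpha) * G1_SmithGGX(NoL, alpha);
}

// (d): height-correlated Smith GGX (Heitz 2014), the most accurate of the four.
float G_SmithGGX_HeightCorrelated(float NoV, float NoL, float alpha)
{
    float a2 = alpha * alpha;
    float ggxV = NoL * sqrt(a2 + (1.0 - a2) * NoV * NoV);
    float ggxL = NoV * sqrt(a2 + (1.0 - a2) * NoL * NoL);
    return 2.0 * NoV * NoL / (ggxV + ggxL);
}
```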
Thanks a lot @DU-jdto! I've integrated your VNDF sampling suggestions and the Height-Correlated G2 function. VNDF helps significantly with reference mode convergence on rougher materials, but doesn't affect the real-time mode much because of the fake specular. Regarding the roughness=1 case, you're right, nothing explodes when I keep specular reflections on these surfaces, but those materials lose contrast and look washed out.
Makes sense. I wonder if you could get away with a slightly higher default value for pt_fake_roughness_threshold now? (Probably not)
Tried it. In my limited testing, increasing pt_fake_roughness_threshold doesn't really seem feasible (the denoiser is really unforgiving when it comes to that cvar...), but I didn't notice any issues setting pt_specular_anti_flicker to 0, and it did produce a small benefit to a few (but not many) surfaces.
This isn't a field I'm particularly competent in; for the most part, the changes are great at bringing out the specular channel. However, they highlight the need for a better temporal filter. Right now anything with high specularity really stands out in a good way, but once you start moving, those objects quickly lose the majority of their detail, which reappears once you stabilize the camera for a second or two. This is just armchair-expert feedback, but I hope some specific improvements are on the agenda.
I finally got around to trying this with the official codebase rather than my fork and yeah, I found the same thing. Interestingly, this is much less the case in my fork, so I guess the Fresnel term differences are more relevant than I had assumed.
Interesting. What are you doing for the Fresnel term?
For specular, I'm using the Schlick approximation with VdotH (where H is the halfway vector / microsurface normal). For diffuse, I'm using the Schlick approximation with NdotL. In neither case am I doing any roughness adjustment (in the specular case, at least, roughness is implicitly accounted for by the use of the microsurface normal in the calculation).

I'm happy with the way the specular term is handled, but not the diffuse one. Ideally, for diffuse I'd want to calculate an average Fresnel over all possible microfacet normals weighted by the NDF, since unlike in the specular case, with a diffuse interaction the outgoing light direction gives no information about the orientation of the microfacets encountered by the incoming light ray. However, deriving a closed-form expression for this seems likely to be a bit beyond me. Failing that, I may reintroduce roughness adjustment for the diffuse Fresnel term (but not the specular one).

For a time I did move away from the Schlick approximation in favor of the actual Fresnel equations, but I went back to Schlick after I noticed that it actually more closely follows real-world reflectivity curves for metals when metals are approximated the way they are in Q2RTX (i.e. basically treated as highly reflective dielectrics rather than true conductors with complex refractive indices).

Obviously my approach complicates things quite a bit and doesn't integrate well with things like the fake specular, which is among the reasons why I've never suggested changing the Fresnel term here. When using just a single Fresnel term, the roughness-adjusted calculation used by Q2RTX seems to work very well (much better than Schlick with NdotV and no roughness adjustment). Do you happen to know where that expression comes from (e.g. is there some paper that explains it)?
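A minimal sketch of the Fresnel usage described above, assuming an F0-parameterized Schlick term (not the fork's exact code):

```glsl
// Schlick's approximation; F0 is the reflectance at normal incidence.
vec3 fresnel_schlick(vec3 F0, float cos_theta)
{
    return F0 + (vec3(1.0) - F0) * pow(1.0 - clamp(cos_theta, 0.0, 1.0), 5.0);
}

// Specular: evaluated with V.H, where H is the microsurface (half) vector,
// so roughness is accounted for implicitly through the sampled H:
//   vec3 F_specular = fresnel_schlick(F0, dot(V, H));
// Diffuse: evaluated with N.L, with no roughness adjustment:
//   vec3 F_diffuse = fresnel_schlick(F0, dot(N, L));
```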
Not a bug, but a suggestion. In my local fork I swapped the indirect specular importance sampling to the VNDF sampling method described in this paper: http://jcgt.org/published/0007/04/01/paper.pdf . This resulted in an immediately apparent improvement in the appearance of indirect specular at low sample counts. The code I used is this:
brdf.glsl:
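The original brdf.glsl snippet isn't reproduced in this copy of the thread. For reference, the sampling routine published in the linked paper (Heitz 2018, "Sampling the GGX Distribution of Visible Normals"), transcribed to GLSL, looks like this; it is the paper's listing, not necessarily the exact code from the fork:

```glsl
#define M_PI 3.14159265358979323846

// Ve: view direction in tangent space (z along the shading normal), with Ve.z > 0.
// alpha_x, alpha_y: GGX roughness parameters; U1, U2: uniform random numbers in [0, 1).
// Returns a microfacet normal Ne sampled proportionally to the distribution of
// visible normals, D_Ve(Ne) ~ G1(Ve) * max(0, dot(Ve, Ne)) * D(Ne) / Ve.z.
vec3 sampleGGXVNDF(vec3 Ve, float alpha_x, float alpha_y, float U1, float U2)
{
    // Section 3.2: transform the view direction to the hemisphere configuration
    vec3 Vh = normalize(vec3(alpha_x * Ve.x, alpha_y * Ve.y, Ve.z));
    // Section 4.1: orthonormal basis (with a special case when Vh is the z axis)
    float lensq = Vh.x * Vh.x + Vh.y * Vh.y;
    vec3 T1 = lensq > 0.0 ? vec3(-Vh.y, Vh.x, 0.0) * inversesqrt(lensq) : vec3(1.0, 0.0, 0.0);
    vec3 T2 = cross(Vh, T1);
    // Section 4.2: parameterization of the projected area
    float r = sqrt(U1);
    float phi = 2.0 * M_PI * U2;
    float t1 = r * cos(phi);
    float t2 = r * sin(phi);
    float s = 0.5 * (1.0 + Vh.z);
    t2 = (1.0 - s) * sqrt(1.0 - t1 * t1) + s * t2;
    // Section 4.3: reprojection onto the hemisphere
    vec3 Nh = t1 * T1 + t2 * T2 + sqrt(max(0.0, 1.0 - t1 * t1 - t2 * t2)) * Vh;
    // Section 3.4: transform the normal back to the ellipsoid configuration
    return normalize(vec3(alpha_x * Nh.x, alpha_y * Nh.y, max(0.0, Nh.z)));
}
```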
indirect_lighting.rgen:
(The added NoV > 0 condition is required: for a while I was getting subtly different results with this importance sampling in reference mode, until I eventually figured out that it was because I had excluded this condition.)
EDIT: Oh, I stopped using the Schlick approximation for the G term a while back, hence the form of my G1_Smith function. Obviously the formula used should be consistent with the one used in the G_Smith_over_NdotV function.
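The indirect_lighting.rgen snippet is likewise not preserved here. Purely as an illustration of where the NoV > 0 guard and the sampled half-vector fit in, a hypothetical call-site sketch (construct_onb and sample_specular_bounce are made-up names, not Q2RTX functions) could look like:

```glsl
// Illustrative only; not the fork's indirect_lighting.rgen code.

// Build an orthonormal basis whose third column is N (any robust method works).
mat3 construct_onb(vec3 N)
{
    vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 T = normalize(cross(up, N));
    vec3 B = cross(N, T);
    return mat3(T, B, N);
}

vec3 sample_specular_bounce(vec3 N, vec3 V, float roughness, vec2 rng, out bool valid)
{
    float alpha = roughness * roughness;       // assumed roughness-to-alpha mapping
    mat3 onb = construct_onb(N);
    vec3 V_local = transpose(onb) * V;         // world -> tangent space

    // The NoV > 0 condition mentioned above: VNDF sampling assumes the view
    // direction lies in the upper hemisphere around the shading normal.
    valid = V_local.z > 0.0;
    if (!valid)
        return vec3(0.0);

    vec3 H_local = sampleGGXVNDF(V_local, alpha, alpha, rng.x, rng.y);
    vec3 L_local = reflect(-V_local, H_local); // mirror V about the sampled microfacet normal
    return onb * L_local;                      // tangent -> world space
}
```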