Access all scene triangles in fragment shader via Buffer Object? #1531
We do have buffers for each object but I'm not sure you can do raytracing in this way - shaders see just individual vertices. Every example I've seen so far does raytracing just on procedurally defined primitives, baked directly in shaders. Check this thread for previous discussion #509 |
By seeing individual vertices, do you mean the vertex shader seeing only its current vertex? |
Indeed.
The only ways to get data to shaders in WebGL are uniforms, attributes and textures (i.e. there are no uniform buffer objects or other such OpenGL features). Attributes are per vertex; uniforms and textures are per object. With uniforms you only get something over 200 x vector4 (in ANGLE; with OpenGL you get more, but not by that much). So the only feasible option seems to be textures. |
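For what it's worth, a minimal sketch of that texture route, assuming float textures (OES_texture_float) are available and using the THREE.Geometry faces/vertices layout this thread talks about (newer three.js versions replaced it with BufferGeometry); `geometryToDataTexture` and `triangleTexture` are made-up names:

```js
// Hypothetical helper: pack every triangle of a THREE.Geometry into a float
// DataTexture, one RGBA texel per vertex (the w channel is unused).
function geometryToDataTexture( geometry ) {

    var faces = geometry.faces, vertices = geometry.vertices;
    var texelCount = faces.length * 3;                    // three vertices per triangle
    var data = new Float32Array( texelCount * 4 );

    for ( var f = 0, t = 0; f < faces.length; f ++ ) {

        var face = faces[ f ];
        var corners = [ vertices[ face.a ], vertices[ face.b ], vertices[ face.c ] ];

        for ( var c = 0; c < 3; c ++, t += 4 ) {
            data[ t     ] = corners[ c ].x;
            data[ t + 1 ] = corners[ c ].y;
            data[ t + 2 ] = corners[ c ].z;
            data[ t + 3 ] = 1.0;
        }
    }

    // For simplicity this is a 1-pixel-tall strip; a real implementation would
    // tile the data into a 2D texture to stay under the maximum texture size.
    var texture = new THREE.DataTexture( data, texelCount, 1, THREE.RGBAFormat, THREE.FloatType );
    texture.magFilter = texture.minFilter = THREE.NearestFilter;
    texture.needsUpdate = true;
    return texture;
}

// Passed to the shader like any other texture uniform, e.g.
// uniforms: { triangleTexture: { value: geometryToDataTexture( mesh.geometry ) } }
```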
Ah, thanks, that lays it out pretty well for me - didn't know WebGL doesn't have UBOs. Cheers for the clear advice, and if you have any other pointers or thoughts I'd be glad to hear them. Thanks. |
@WestLangley Thanks for pointing me here from #1572. So, if I understand correctly, I'll need to pass the geometry, as a DataTexture, as a uniform, to the shader. Then I can have all objects share the same shaderMaterial and do all the ray-tracing calculations in the shader. That means I'll need one uniform for each object in the scene, right? Also, how do I extract the data from THREE.Geometry into the THREE.DataTexture? Thanks in advance. |
Hey, reopened the issue for you. I was thinking of doing the same before - having each object use a "raytracing shader" - but every approach I've seen uses a quad facing the camera as a virtual screen, with the raytracing taking place in its shader - see Evan Wallace's app for example - and now I'm leaning that way, passing information from a reference scene to it in DataTextures so it can draw it. The reference information needed is:
These DataTextures would go in a sort of hierarchy (see the section of the paper referenced above), and the screen shader would then have access to all the global information it needs about the reference scene (a rough sketch of this setup follows below). This approach may need some changes to the renderer - the reference scene must be updated, but not actually rendered (only the screen needs to be rendered) - and I'm not sure if it's a good way to do it or not; but as for your two questions:
|
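A rough sketch of that "virtual screen" approach, written against a more recent three.js API than the one from this thread (uniform syntax has changed since). `triangleTexture` and `triangleCount` are assumed to come from a packing step like the one sketched earlier, and the fragment shader is only a stub showing how a packed texel is fetched:

```js
// Full-screen quad whose fragment shader would do the ray tracing;
// the reference scene is supplied only as data textures and is never rendered.
var screenScene  = new THREE.Scene();
var screenCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );

var screenMaterial = new THREE.ShaderMaterial( {

    uniforms: {
        triangleTexture: { value: triangleTexture },   // packed scene geometry
        triangleCount:   { value: triangleCount }
    },

    // The quad's vertices are already in clip space, so no matrices are needed.
    vertexShader: [
        "void main() {",
        "    gl_Position = vec4( position, 1.0 );",
        "}"
    ].join( "\n" ),

    // Stub: fetches the first packed vertex and shows it as a colour.
    // A real tracer would loop over the texture and intersect rays here.
    fragmentShader: [
        "uniform sampler2D triangleTexture;",
        "uniform float triangleCount;",
        "void main() {",
        "    vec3 v0 = texture2D( triangleTexture, vec2( 0.5 / ( triangleCount * 3.0 ), 0.5 ) ).xyz;",
        "    gl_FragColor = vec4( abs( v0 ), 1.0 );",
        "}"
    ].join( "\n" )

} );

screenScene.add( new THREE.Mesh( new THREE.PlaneGeometry( 2, 2 ), screenMaterial ) );
// renderer.render( screenScene, screenCamera );
```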
Thanks a lot for the re-open. Good stuff here, I think I understand your idea. So, for the reference information, I'll load the geometry from file, then pass it as textures to the shader (like you said, seems like a good idea). But how do I get the matrices for each object I load? As for the changes to the renderer, will they really be needed? Can we render the image when the shader finishes, or is the shader asynchronous from the renderer? I will post the structure of the program I'm thinking of tomorrow. |
No problem - out of curiosity, what kind of FPS were you getting in the JavaScript version, and for how complex a scene? I've been rethinking my idea, and it'd be nice to avoid editing WebGLRenderer if possible - I think a ray tracing shader for each object would probably work, and there'd even be a small performance improvement in that none of the initial "eye" rays would miss the object. I have no idea whether there's a good reason no-one else I've seen does it that way, but it's worth a shot - most of the code would be transferable. The matrices are handled by THREE, in Object3D, and updated according to the values of Object3D.rotation, scale and position (see Object3D.updateMatrixWorld; there's a small sketch below). The texture-passing idea would take some work - you're basically using the textures as memory, with references from one to the other, all in a hierarchical structure, all of which you need to devise in a way that makes sense (again, I found the paper above helpful). This issue seems to be getting somewhat off the original topic - maybe it should be moved to a new one, I dunno. But good luck with your thing. |
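To make the matrices part concrete, a small sketch against the current three.js API: the world matrices already live on each Object3D and just need to be collected after the scene graph is updated (`referenceScene` is assumed to be the scene holding the geometry):

```js
// Refresh every object's matrixWorld (built from position, rotation and scale),
// then collect the ones belonging to meshes.
referenceScene.updateMatrixWorld( true );

var worldMatrices = [];
referenceScene.traverse( function ( object ) {
    if ( object instanceof THREE.Mesh ) {
        worldMatrices.push( object.matrixWorld.clone() );
    }
} );

// worldMatrices can now be packed into a DataTexture alongside the geometry,
// or passed directly as a mat4 array uniform if there are only a few objects.
```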
I didn't count the FPS, only the total load time of the page. On a 500x500 frame, shooting a ray through each pixel and getting the intersections takes 400s on average. The problem isn't really the object's complexity; in this case it was the number of rays that needed to be shot - since each ray is different, it takes JavaScript a very long time to process them all. I think the problem with using a ray-tracing shader is that you can't transfer information from one shader to another, meaning that, after shading one object with the intersections you got, how would you shade it again when some other object's refraction hits it again? From what I've learned, we can do it like this in the fragment shader:
But yeah.. I can see a lot of problems here. Setting all those colors wouldn't be an easy task (at least, not for me). It would be much easier to use THREE.Ray.IntersectObjects(), get the color, and just set the color in the fragment shader. But again, JavaScript just takes too long. What I'm trying to do is something like this: |
By the way, here is the code that takes about 400s (at a 500x500 render size) just to cast the rays and get the intersections:

```js
for ( i = 0; i < renderWidth; i ++ ) {
    for ( j = 0; j < renderHeight; j ++ ) {
```
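For reference, a guess at what that timed loop looks like in full, rewritten against the current THREE.Raycaster API rather than the old THREE.Ray call mentioned above; `renderWidth`, `renderHeight`, `camera` and `scene` are assumed to exist:

```js
var raycaster = new THREE.Raycaster();
var ndc = new THREE.Vector2();

for ( var i = 0; i < renderWidth; i ++ ) {
    for ( var j = 0; j < renderHeight; j ++ ) {

        // pixel (i, j) -> normalized device coordinates in [-1, 1]
        ndc.set( ( i / renderWidth ) * 2 - 1, 1 - ( j / renderHeight ) * 2 );
        raycaster.setFromCamera( ndc, camera );

        // closest hit (if any) for this pixel is hits[ 0 ]
        var hits = raycaster.intersectObjects( scene.children, true );
    }
}
// 500 x 500 = 250,000 rays traversed one at a time on the CPU is what makes this so slow.
```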
I'm not exactly sure what you mean here. The fragment shader accumulates the colour for each pixel based on ray-object intersections, starting with the ray from the "eye" to its position - the pixels don't keep getting "painted" by other shaders or anything. That one fragment program accumulates the final colour for each pixel shown in that object; you wouldn't do it again. |
Looks like I'm getting it all wrong then. I thought the fragment shader was executed once for each pixel of the object which has the shaderMaterial? |
Yes, that's right.
What I'm saying is that you wouldn't need to "shade it again", as you were asking. Shading would happen once per visible object pixel, in the fragment shader. |
I was thinking I would need to shade it again, because of the situation when a ray hits an object, and then the refraction hits another one that has already been shaded? Sorry if I'm missing the point, getting a little confused here. |
Ah, I see. You're saying that the first object should accumulate the shaded colour of the second? The way you were describing it would almost be a breadth-first method - calculate the first bounce for everything, then the second bounce (using colour information from the first), and so on. I've never heard of ray tracing being done that way; I don't know if it would even be possible. |
Actually it is the exact opposite: the second object should accumulate the color of the first, since the ray comes from it. EDIT: Check this out: http://www.zynaps.com/site/experiments/raytracer.html |
Sorry, yeah, that's what I meant. But yeah, that seems to be the way it's done everywhere (a sketch of the usual iterative bounce loop is below). It does have some limitations and difficulties of its own - any effect you want has to be available to the global ray tracing method (or, at least, I have no idea how you would put a custom local shader in the mix somewhere and have it accurately reflected/refracted in other objects). |
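To illustrate "the way it's done everywhere": inside a single fragment program, the reflection/refraction chain is usually followed with a fixed-length loop that attenuates an accumulated colour, rather than objects re-shading each other. A hedged GLSL sketch, carried here in a JavaScript string; `intersectScene` and `shadeLocal` are hypothetical helpers that would be built on the packed triangle data:

```js
var bounceLoopGLSL = [
    "struct Hit { bool found; vec3 position; vec3 normal; vec3 reflectivity; };",
    "",
    "// intersectScene() and shadeLocal() are assumed to be defined elsewhere,",
    "// testing the ray against the packed triangle texture (hypothetical).",
    "",
    "const int MAX_BOUNCES = 3;",
    "",
    "vec3 traceRay( vec3 origin, vec3 direction ) {",
    "    vec3 colour = vec3( 0.0 );",
    "    vec3 attenuation = vec3( 1.0 );",
    "",
    "    for ( int bounce = 0; bounce < MAX_BOUNCES; bounce ++ ) {",
    "        Hit hit = intersectScene( origin, direction );",
    "        if ( ! hit.found ) break;",
    "",
    "        colour += attenuation * shadeLocal( hit );   // local lighting at this hit",
    "        attenuation *= hit.reflectivity;             // energy carried to the next bounce",
    "",
    "        origin = hit.position + hit.normal * 0.001;  // offset to avoid self-intersection",
    "        direction = reflect( direction, hit.normal );",
    "    }",
    "    return colour;",
    "}"
].join( "\n" );
```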
That's pretty cool - yours? It's like POVRay. |
No, that's not mine. It's just an example in JavaScript, using web workers to run the rays in parallel. Could be an option too. |
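In case it's useful, a minimal sketch of that web-worker option: split the image into horizontal bands and let each worker trace its own rows. `tracer-worker.js` and `copyRowsIntoFramebuffer` are hypothetical - the worker script would just run the per-pixel loop shown earlier and post the resulting colours back:

```js
var WORKER_COUNT = 4;
var rowsPerWorker = Math.ceil( renderHeight / WORKER_COUNT );

for ( var w = 0; w < WORKER_COUNT; w ++ ) {

    var worker = new Worker( 'tracer-worker.js' );

    // each message carries the finished rows for one band
    worker.onmessage = function ( event ) {
        copyRowsIntoFramebuffer( event.data );   // e.g. { firstRow, pixels: Float32Array }
    };

    worker.postMessage( {
        firstRow: w * rowsPerWorker,
        rowCount: rowsPerWorker,
        width: renderWidth
    } );
}
```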
Also since we are working on the same thing, please feel free to contact me by email if you have more ideas. I'll do the same. |
I think there's parallelism built in to the GPU side, though I'm not sure exactly to what degree - each fragment is independent of all other fragments though, so maybe it's at that kind of level. |
Ok then, you can close this one now. I see that you don't have an email on your profile - or am I looking in the wrong place? |
Put the email up there now, see you around. |
Hey.
I'm trying to implement a real-time ray tracer in THREE, running most of the computation in a fragment shader.
Having access to the scene geometry (just in the form of triangles, for simplicity, with material information too) and lights (a custom subclass in this case) from the shader is a core part of this - each pixel needs to check ray intersections against the surrounding geometry and lights to achieve the effect.
What I want to find out is whether there's an existing overall representation of the scene being constructed somewhere within THREE that I can just send down to the shader, or if I need to somehow process the scene myself.
What I'm looking for, ideally, is one or more WebGL buffer objects (or uniform buffer objects) that I could just reference from within the shader.
Alternatively, where is the best place to grab hold of everything in the scene so I can construct such a representation myself?
Thanks for any help you can offer.