Support recording applications that use GPU via GL #2507
Fedora packages VirtualGL. Hopefully Ubuntu does too. That helps. Trying it, it looks like some amount of DRM ioctl support would still be required to use it. (But maybe I'm doing it wrong.)
I misunderstood what VirtualGL does. An application using VirtualGL is still intended to use direct rendering. VirtualGL captures the resulting images and forwards them to another display. It is not intended to be a faster/more complete GLX, which is what we want. It does intercept GL calls, though, so it might be usable as the basis for implementing a GL forwarding solution.
This is possibly helpful: https://github.com/jrmuizel/rr-dataflow It shows how the Mesa softpipe renderer can be used to track pixel changes.
I stumbled upon this ticket while considering a similar feature, but here I'd like to propose a different implementation. Graphics is currently treated as a kind of side effect in rr, but for debugging it's better to have it as part of program state, so we can see what has been rendered up to the point we have replayed the program to.

To achieve that, I propose that we integrate with a graphics API call recording tool such as RenderDoc or apitrace. These tools work by hooking the graphics APIs themselves (OpenGL, Vulkan, etc. rather than DRM), so what we need in rr is some way to record at the graphics API level (rather than the driver communication level) and disable syscall recording inside Mesa userland components. Does this sound feasible?

(The proposal above only concerns recording so far; for replaying we would need additional coding in RenderDoc etc. to restore the GPU state up to a snapshot.) As an optional addition, we could use an IPC-based graphics implementation to isolate potential undefined behavior, which is basically the virtualization idea initially described in this issue.
Unfortunately not. A fundamental invariant of rr is that during replay we reproduce the userspace register and memory state of the recorded processes. It doesn't seem feasible to separate out the state of a particular library in an address space and say "this will be different during replay". For example, the library will very likely share a memory allocator with other libraries we do want to record, so diverging behaviour of the library will cause memory addresses in other libraries to diverge, which breaks rr.

So, we could pick one or more open-source graphics drivers, like SVGA3D or Virgil, and support those directly --- if they don't share memory between userspace and the driver in ways that are too difficult to handle. Alternatively, we do a GLX-like thing that interposes on GL (or Vulkan, if we can make that work) and forwards everything to a process outside the recording.
For Vulkan support, SwiftShader should work fine in CPU mode.
One way to do this would be to pick one or more open-source GPU drivers, study their kernel/user interface, and support that directly in rr. VMWare's SVGA3D or QEMU's Virgil might be a good choice.
Another way, maybe better, would be to adapt https://virtualgl.org/ to route GL calls through a pipe and record that.