CEF3: Support rendering to a hardware GL/D3D texture/surface provided by the client #1006
Comments
Original comment by Anonymous. Comment 2. originally posted by hele@splitmedialabs.com on 2014-02-14T05:30:14.000Z: Hi, I just wanted to follow up on this issue. I remember we had some good discussions back in issue #518 about a fairly simple approach to supporting GPU acceleration for off-screen rendering on Windows by using shared surfaces. Any idea on the timeline here?
Original comment by Anonymous. Comment 3. originally posted by efrencd on 2014-02-19T19:11:57.000Z: Hello, here is a very interesting article about the Desktop Window Manager in Windows Vista, 7 and 8: http://en.wikipedia.org/wiki/Desktop_Window_Manager It seems that in Windows Vista and beyond, all windows are actually textures that are rendered to an offscreen buffer and then composited onto a 3D surface (actually a D3D surface) representing that window. Taking that into account, I guess there must be an easy way to get the whole (hardware-accelerated) content from CEF and use it as a texture so that we can embed it in our own apps (see awesomium.com). This will probably only work in Windows Vista and beyond, not in Windows XP.
Original comment by Anonymous. Comment 4. originally posted by MGusarenko on 2014-04-07T08:25:34.000Z: I am working on a patch for hardware off-screen rendering support; a description will be available on the CEF forum as soon as it clears premoderation. Everyone who is interested can join.
Comment 5. originally posted by magreenblatt on 2014-04-07T14:49:43.000Z: Re: comment 4: The forum post is http://magpcss.org/ceforum/viewtopic.php?f=8&t=11635.
Original comment by Anonymous. Comment 6. originally posted by jarred.nicholls on 2014-04-08T19:38:14.000Z: Amazing news re: the hardware-accelerated off-screen rendering! It really helps my use case of integrating CEF into the QtQuick/QML scene graph without compromising.
Comment 7. originally posted by magreenblatt on 2014-06-28T00:10:37.000Z: There are currently two use cases for hardware acceleration:
Use case 1 will be addressed by issue #1257, which re-implements off-screen rendering support in trunk using the delegated renderer. Use case 2 will be left to this issue.
Comment 8. originally posted by magreenblatt on 2014-06-28T00:12:53.000Z:
Comment 9. originally posted by magreenblatt on 2014-10-23T17:31:24.000Z:
Comment 11. originally posted by magreenblatt on 2014-11-07T22:00:18.000Z: It's likely best to implement this at the Ozone layer. See http://crbug.com/431397 for details.
Original comment by Anonymous. Comment 12. originally posted by efrencd on 2015-03-01T13:39:33.000Z: Hello, are there any plans or an approximate date for when this feature will be added? Thanks.
Original comment by Anonymous. Comment 13. originally posted by ameggs on 2015-03-02T22:55:56.000Z: My (possibly incorrect or outdated?) understanding is that Chromium renders to GPU-side 256x256 pixel tiles, rather than to a single buffer that's the full size of the visible browser window. When something on the page changes, only the tiles that changed get redrawn. At display time, something called the "ubercompositor" takes the current list of tiles and merges it with browser-side UI elements. The only time the entire web page is drawn into a single buffer is when the tiles are combined with that UI for the final render. I'm basing this on these two documents; if I'm wrong, I welcome the correction: https://docs.google.com/document/d/1tkwOlSlXiR320dFufuA_M-RF9L5LxFWmZFg5oW35rZk https://docs.google.com/document/d/1ziMZtS5Hf8azogi2VjSE6XPaMwivZSyXAIIp0GgInNA But if my understanding is correct, then implementing this as described would mean Chromium or CEF would render all of its tiles into a buffer we provided, and then we'd render that buffer into our own view. But if the goal here is to reduce latency and copies, maybe the best way to achieve this isn't for CEF to render to a texture/surface that we provide. Instead, just expose the current tile set to us. We can send those tiles into our own 3D renderer as part of our own independently-running update/draw/present loop. This would be especially appreciated by game developers, because we're frequently GPU-limited, and anything that reduces the number of GPU-side copies would help.
Samsung has (very indirectly) proposed a new API for hardware-accelerated off-screen rendering in CEF: http://blogs.s-osg.org/servo-the-countdown-to-your-next-browser-continues/ ("Composite Smarter, Not Harder" section). Also, using in-process-gpu in combination with a custom OutputSurface seems like a promising implementation approach. Related conversation: https://groups.google.com/a/chromium.org/d/msg/graphics-dev/cnr8n3mdRFc/2n7QrPAaAQAJ |
Original comment by Cef Gluist (Bitbucket: cefgluist). Updated patches intended for Chromium 41 / CEF r2272.
Original comment by Cef Gluist (Bitbucket: cefgluist). Hi everybody. I've updated MGusarenko's patch to CEF rev 2272 / Chromium 41. It SHOULD work. A friend of mine is currently trying to build/integrate it, and we should get more feedback soon. Otherwise, I just thought I'd drop this here; perhaps MGreenblatt or MGusarenko would be curious to have a quick look. Perhaps I've overlooked something. Regards,
@cefgluist : The OnAcceleratedPaint interface no longer exists in Chromium and consequently the implementation that you propose can no longer be used with current CEF versions. |
Original comment by Emmanuel ROCHE (Bitbucket: ultimamanu). Hello everyone, I've been watching this issue for quite a long time now, and I was hoping someone else would find the time to handle it and save me the trouble... But well, I guess that's not how life works ;-). In my company, we have been using CEF quite intensively for more than a year now, and we reached a point where we really needed to squeeze more performance out of this off-screen rendering mechanism. So a couple of months ago I decided I should stop waiting, try this myself, and find a working solution (at least for us...). And now I'm glad to announce that I finally made it (it was incredibly complex, sure, but that's OK, because I learned a lot in the process): I have a working mechanism to render to a DirectX surface directly on the GPU (i.e. without first copying the CEF-rendered textures to a Skia bitmap on the CPU). Note that this work is based on CEF branch 3163, which is not that old, so I think it should not be too hard to integrate it into the current version of CEF (if applicable?). Of course, there are some limitations: the system I'm using is only designed for use on Windows (only tested with Windows 10 so far, and I'm most probably missing some platform-dependent constraints and checks), and it only allows direct copies to DirectX surfaces (tested with DirectX 9 so far, as this is what we need, but I'm pretty sure it would work the same with DirectX 10 or 11... no idea about DX 12). So for those interested in Linux support, this is not good enough yet (but it might serve as a source of inspiration?).
For those interested in copying to a GL surface on Windows, this might still help, because one can use the DirectX surface "layer" to bridge the gap between a given software process and the CEF GPU service process (in fact I'm not sure this could be done another way with GLES/OpenGL [except if you run the GPU service directly in the browser process?]), and then perhaps more easily share the DirectX surface with an OpenGL context from a single process (i.e. your software process). Basically, the idea I'm using here has been around for a while, but as far as I know no one has actually tried to implement it (?). It simply exploits the fact that CEF uses ANGLE on Windows to convert the GLES layer to a DirectX layer, and ANGLE supports interop with DirectX out of the box (cf. for instance https://github.com/Microsoft/angle/wiki/Interop-with-other-DirectX-code). So from there:
-> And that's it! No need for additional synchronization, no need to send anything back to the client from the service process; the DirectX surface continuously receives the updated compositor surface with a simple quad rendering, and we don't have to copy anything to CPU memory! :-) Hmmm, and now that I think about it, there are also other minor limitations I should mention:
Anyway, I'm planning to spend a few more days cleaning/validating the current code, and then I will write an article about this work and post an initial version of the updated files here so that the community can have a look and see if this can be of any help to you. For now, I just wanted to post a "little" teaser to get you excited (hopefully) ;-)!
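The ANGLE/DirectX interop idea described in the comment above can be sketched roughly as follows. This is an untested illustration based on the public ANGLE_d3d_share_handle_client_buffer extension, not the author's actual patch; the display, config, and share handle are assumed to come from the host application's existing EGL/D3D setup.

```cpp
// Untested sketch: wrap a D3D shared-surface handle in an EGL pbuffer surface
// via ANGLE. Rendering a textured quad into this surface then writes directly
// into the D3D-side resource, with no CPU-side copy.
#include <EGL/egl.h>
#include <EGL/eglext.h>

EGLSurface WrapD3DShareHandle(EGLDisplay display, EGLConfig config,
                              void* share_handle, int width, int height) {
  const EGLint attribs[] = {
      EGL_WIDTH, width,
      EGL_HEIGHT, height,
      EGL_TEXTURE_FORMAT, EGL_TEXTURE_RGBA,
      EGL_TEXTURE_TARGET, EGL_TEXTURE_2D,
      EGL_NONE};
  // ANGLE-specific: the client buffer is a D3D texture share handle.
  return eglCreatePbufferFromClientBuffer(
      display, EGL_D3D_TEXTURE_2D_SHARE_HANDLE_ANGLE,
      (EGLClientBuffer)share_handle, config, attribs);
}
```

The returned surface can then be made current (or bound with eglBindTexImage) so that the compositor output lands in the shared D3D surface that the client application samples from its own device.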
Original comment by Mikael Hermansson (Bitbucket: mihe, GitHub: mihe). @ultimamanu That's fantastic news! I've recently been looking into doing something similar (with only Windows and Direct3D in mind), but got stuck when trying to grasp the full pipeline of the ANGLE textures to
There was a patch released back in 2014 that did something similar. We (where I worked at the time) made use of it back then, and we ended up shipping with it eventually. We did have issues with the GPU blacklist inside Chromium messing things up for certain users, resulting in a completely black texture for them. Being able to fall back on the software implementation is very useful in those kinds of scenarios, even if it's just an explicit flag that the user has to pass in to deactivate the hardware-accelerated path. As mentioned in that thread, Chromium replaced their compositor shortly thereafter, so you were pretty much stuck with that revision of CEF if you wanted hardware acceleration. Anyway, I'd love to get my hands on some patch/diff files to try this out myself and give feedback, even with all the rough edges. Ideally it would be nice to have a GitHub or Bitbucket fork of CEF specifically for this, until it can get merged into CEF itself (if that's even possible with only Direct3D support).
Original comment by Emmanuel ROCHE (Bitbucket: ultimamanu). Hi everyone, As mentioned in my previous note above, I can now provide some initial files you could use on your side to move forward with this issue. I've stored those files in the following minimal GitHub repo: https://github.com/roche-emmanuel/cef_direct3d_offscreen_rendering Also, I have a blog article describing the basics of how this patch is supposed to work: http://wiki.nervtech.org/doku.php?id=blog:2017:1130_cef_direct_copy_to_d3d And if you have any questions, you can mention me here or reach me any other way. On my side, I'll keep testing and polishing to see where this leads me...
Original comment by Mikael Hermansson (Bitbucket: mihe, GitHub: mihe). @ultimamanu Awesome, I'll be trying out your changes in the coming days. I'll let you know how it goes. Also, I'm not a lawyer or any sort of expert on open-source licensing, but you might want to look into bundling a license file. If you don't assign a license of some form, there might be legal issues with making use of your code in production. Somebody more well-versed in licensing might be able to shed some light on this, though.
Original comment by Emmanuel ROCHE (Bitbucket: ultimamanu). Hi @mihe Thanks for the tip on the licensing question... You're right, this is something I should add to the repo; I just have no idea yet what content I should put in there :-) I'll be looking around. (But of course I want to share this with everyone, for any kind of usage and/or modification, etc.) And yes please, let me know how this works for you! I already noticed something wrong with this system when used in my own production software: currently it generates a lot of rendering passes each second (for the content I'm rendering, at least), and in my client process I'm also using a regular "on-screen" CEF window with some other content; this on-screen window will eventually report a "WebGL context lost" error and everything will freeze for a second or two (looks like a GPU service crash?)... I'm investigating this.
Original comment by Emmanuel ROCHE (Bitbucket: ultimamanu). Hi guys, So I just added a license file to the GitHub repo I mentioned above (template taken from https://github.com/LiuLang/cef). But I'm not quite sure this is what is expected, so if anyone has anything better to suggest, please let me know. Thanks!
@ultimamanu Thanks for working on this and sharing your findings. Would you mind submitting your CEF/Chromium changes as a PR against the 3163 branch? That would make it easier to view, test and comment on your changes. The Chromium changes can be applied using a patch file (see cef/patch/README.txt for details). General PR creation docs are here: https://bitbucket.org/chromiumembedded/cef/wiki/ContributingWithGit.md#markdown-header-working-with-pull-requests
@ultimamanu In cases where we're adding large amounts of new code in Chromium (e.g. the GLES2DecoderImpl::HandleNervCopyTextureToSharedHandle implementation) we should use the buildflag/feature capability instead of including that code directly in Chromium patch files. For example, add a new source file in CEF that provides the helper implementation, include that file from the "gles2_sources" target in gpu/command_buffer/service/BUILD.gn, and call that helper implementation from a minimal GLES2DecoderImpl::HandleNervCopyTextureToSharedHandle method implementation. For more info on this approach see the documentation at https://bitbucket.org/chromiumembedded/cef/src/master/libcef/features/BUILD.gn. |
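The buildflag approach described above might look roughly like this in GN. This is an illustrative config fragment only; the flag, helper path, and file names are assumptions, not actual CEF/Chromium source paths.

```gn
# Hypothetical sketch of the buildflag/feature approach described above.
# In gpu/command_buffer/service/BUILD.gn, the "gles2_sources" list would
# conditionally pull in a CEF-provided helper instead of carrying the full
# implementation in a Chromium patch file:
if (enable_cef) {
  gles2_sources += [
    "//cef/libcef/common/gpu/shared_handle_copy_helper.cc",  # assumed path
    "//cef/libcef/common/gpu/shared_handle_copy_helper.h",
  ]
}
```

The patched GLES2DecoderImpl method then shrinks to a one-line call into that helper, which keeps the Chromium-side patch file minimal and easier to rebase.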
@ultimamanu Can you also share some example client code (perhaps a modified version of cefclient) that sets up and renders the Direct3D handle that is provided to CEF via CefRenderHandler::GetSharedHandle? Thanks. |
Original comment by Mark Petersen (Bitbucket: Mark_Petersen). Very interested in these findings, as I am in somewhat the same situation and want it to run as an OpenGL surface. So thank you very much for all your work, and I hope you could, as Marshall Greenblatt suggested, make a PR for this so others can more easily test and contribute to it :)
Original comment by Ole Dittmann (Bitbucket: oled, GitHub: oled). @ultimamanu: Just wanted to say that I am also very interested in this, and your work is very much appreciated. We have exactly the same situation: a Direct3D application where we would like to integrate CEF. But we experienced bad performance, probably because of windowless rendering and the (theoretically superfluous) copying of frame data via system memory and back. My latest test was with CEF 3.3239, and I noticed a substantial performance improvement compared to previous tests with version 3.2526. CEF seems to use hardware compositing now, even in windowless mode. But especially at higher resolutions (like 4K), performance is still bad compared to normal windowed mode. As far as I can see, CEF always copies its frame data completely into a system-memory bitmap before calling "OnPaint"; even if only a small area of the frame is updated, it makes no difference. Also, it seems that it does not properly re-use memory frames (you get many new addresses), which might prevent automatic optimization by the graphics hardware for faster copying. So I think your approach with the shared texture might give a HUGE performance boost for our case. And we would very much like to see it integrated into CEF some day!
Original comment by Renato Ciuffo (Bitbucket: renatoc8, GitHub: renatoc8). I believe Vulkan offers the ability to share a texture across different contexts. And I believe that we can share a texture across APIs (Direct3D, OpenGL) as well. If both of those statements are true, then I think we should focus on a solution that uses Vulkan instead of Direct3D to accomplish the sharing of textures between the render and browser processes. Using Vulkan would allow us to accomplish exactly what we're trying to accomplish, whilst still being cross-platform. |
Original comment by Alexander Guettler (Bitbucket: xforce, GitHub: xforce). Well, you still need platform-specific code somewhere; also, I don't think it would be the best idea to force some Vulkan stuff on the user.
Original comment by Renato Ciuffo (Bitbucket: renatoc8, GitHub: renatoc8). I agree that we shouldn't force Vulkan on the user, and I don't think we should force Direct3D on the user either. Both APIs should require user opt-in, as the Direct3D implementation currently does. Are there any technical reasons that would prevent a Vulkan solution from working, aside from not being supported by some users? I also vaguely remember reading somewhere that you're able to share an OpenGL context across different processes on Linux (and perhaps MacOS?), despite not being able to share a context on Windows. So that's another potential way to bring shared texture support to other platforms. Is anyone here familiar with this? |
FYI, Chromium is adding support for a Vulkan backend. You can find the related tracking bugs here: https://bugs.chromium.org/p/chromium/issues/list?q=label:Proj-Vulkanize |
Architectural changes to the CEF OSR implementation are being discussed currently in issue #2575. |
Original comment by Riku Palomäki (Bitbucket: riku_palomaki). I don't think the issue description is very accurate. You can use hardware acceleration for WebGL / 3D CSS etc.; only compositing is tricky. Even with 3D acceleration + software compositing you can get pretty good performance, even without shared texture support:
Original comment by Alexander Guettler (Bitbucket: xforce, GitHub: xforce). @GPBeta You can share the underlying DXGI surface with a D3D9 context, see https://docs.microsoft.com/en-us/windows/desktop/direct3darticles/surface-sharing-between-windows-graphics-apis |
Original comment by Alexander Guettler (Bitbucket: xforce, GitHub: xforce). Oh, you are talking about platforms that don't even support D3D11 in the first place; I wonder what those are?! There are some drastic changes coming to OSR in the near future, which might give us better support for actual hardware surface sharing; follow #2575 for the discussion. Sorry for not reading your comment fully.
Original comment by Joshua Chen (Bitbucket: gpbeta, GitHub: gpbeta). @xforce_dev Thank you for the responsive reply! |
Linux: Add OSR use_external_begin_frame support (see issue #1006) → <<cset a48e0720762a (bb)>> |
Linux: Add OSR use_external_begin_frame support (see issue #1006) → <<cset 2ff59af42954 (bb)>> |
Linux: Add OSR use_external_begin_frame support (see issue #1006) → <<cset ce74f0ae4dff (bb)>> |
Original comment by Romain Caire (Bitbucket: Romain Caire). Hi, I'd like to help implement Accelerated Paint in OSR mode on Linux. Is there any work being done on this that I could look at?
Original comment by Alexander Guettler (Bitbucket: xforce, GitHub: xforce). As part of #2575 (viz-implementation-for-osr) I am investigating the possibility of supporting more platforms. As part of the last Chromium update, my WIP Viz OSR work was merged without any shared surface, because the old implementation doesn't work with Viz. I will publish a new early-work PR sometime this week to add shared surfaces again for Windows and go from there.
Original comment by Romain Caire (Bitbucket: Romain Caire). Hi @{557058:f07fe0c3-2eef-4563-993e-dcdb7d76e546}, did you have some time to work on this? Feel free to DM me on Twitter @RomainCaire if you need any help or if I can be useful :)
Issue #3216 was marked as a duplicate of this issue. |
Issue #3263 is likely the best way to implement this for Linux (and possibly other platforms in the future). |
Original comment by Joshua Chen (Bitbucket: gpbeta, GitHub: gpbeta). Hi again! Just a quick question: can we create the staging texture with D3D11_BIND_RENDER_TARGET? We want to make use of the shared texture provided by CEF3 with ANGLE backed by Direct3D 11, so that is the only choice. After some digging into ANGLE's source code, we found that:

```cpp
EGLint SwapChain11::resetOffscreenColorBuffer(const gl::Context *context,
                                              int backbufferWidth,
                                              int backbufferHeight)
{
    ...
    // Fail if the offscreen texture is not renderable.
    if ((offscreenTextureDesc.BindFlags & D3D11_BIND_RENDER_TARGET) == 0)
    {
        ERR() << "Could not use provided offscreen texture, texture not renderable.";
        release();
        return EGL_BAD_SURFACE;
    }
    ...
}
```

It's reasonable both that a PBuffer should be renderable and that a shared texture from CEF should be read-only, but I think it should be safe to make the staging texture renderable?
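For reference, a texture description that passes the ANGLE check quoted above might look like the following. This is an untested, illustrative fragment (our own, not from CEF or ANGLE); the format, sharing flag, and variable names are assumptions that would need to match the client's actual pipeline.

```cpp
// Untested sketch: a D3D11 texture that is renderable (so ANGLE accepts it),
// samplable (so the client can draw it), and shareable across processes.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = width;    // assumed to match the browser view size
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
// D3D11_BIND_RENDER_TARGET satisfies ANGLE's renderability check above;
// D3D11_BIND_SHADER_RESOURCE lets the client sample the result.
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
// Keyed-mutex sharing for synchronized cross-process access (an assumption;
// plain D3D11_RESOURCE_MISC_SHARED also works if synchronization is external).
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

ID3D11Texture2D* texture = nullptr;
HRESULT hr = device->CreateTexture2D(&desc, nullptr, &texture);
```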
Adds support for the OnAcceleratedPaint callback. Verified to work on macOS and Windows. Linux support is present but not implemented for cefclient, so it is not verified to work. To test: Run `cefclient --off-screen-rendering-enabled --shared-texture-enabled`
Original report by me.
Original issue 1006 created by magreenblatt on 2013-06-27T17:50:13.000Z:
CEF3 off-screen rendering does not currently support hardware acceleration. This means that some features like 3D CSS which require hardware acceleration do not currently work when using off-screen rendering.
See issue #518 for related comments.