Implement Drawable Textures #7379
I think this supersedes #5272
Is there a way to support blitting an arbitrary Mesh object, and/or to expand the Vector2 points to allow for some Vector3 data, such as vertices or normals? Clearly this wouldn't have a depth buffer or anything, just access to the mesh data. As an example, consider a baking process in which we want to paint from a source texture onto a target texture using the mesh's UV layout. In this case, baking texture data might need the mesh's UV layout, but also access to vertex, normal and tangent data. (And if 3D meshes are ruled out due to vertex layout constraints and performance tradeoffs, I could ask a related question for meshes in 2D mode, which ought to be compatible with this vertex layout.)
@lyuma Yes, but to be honest, my way of thinking is that this is intended to be just a simple API, so if you want to do something considerably more complex, it's best to just use RenderingDevice directly.
If I understand it, this would supersede many devs' needs for texture partial updates and high-bit-depth viewports, as we could then do partial updates directly on the texture. If so, do the same limitations that have hindered implementation of both of those hinder this? Can this be implemented faster than implementing the other two? Is this proposal just a nice-looking API that still has to be figured out underneath, or is it something doable sooner rather than later?
@TokisanGames It should not be hard to implement, and it's probably the better way to go forward, as otherwise more and more hacks have to be piled onto Viewport, which it is not meant for.
Sounds like it would allow reworking AnimatedTexture instead of removing it.
Would this API be blocking or asynchronous? I think there are two cases where this can be used (at least from what I would need): one-off generation in tools, where timing isn't critical, and per-frame processing at runtime (post-processing and the like), where the time it takes to run matters a lot more. Or would the two still require completely different APIs? In fact, it seems that either way it should not be synchronous until needed (like a download back to the CPU), but in the second case the time to run it matters more than in the first.
Instead of limiting the parameters to fixed textures and a modulate color, the blit call could also take a material. So, for example, something roughly like this (a sketch):
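```cpp
// Sketch of a material-driven blit; parameter names are assumptions.
void texture_drawable_blit(const Vector<RID> &p_textures, const Rect2i &p_rect, RID p_material, int p_to_mipmap = 0);
```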
Here RID p_material should be a ShaderMaterial. This way you can define and pass whatever arguments you need through the material's shader uniforms. I think both this and godotengine/godot#75436 are complementary and have different use cases, although they can overlap, and many will be happy with just this API. For example, godotengine/godot#75436 would be very useful for batch rendering octahedral impostors in my current project during runtime.
@bitsawer I think you misunderstood the proposal. If you want to do that, nobody stops you; by the sole fact that the API takes a material, that should work. The option to pass custom textures exists because it can get quite annoying to have to create a ShaderMaterial for every combination of textures you want to pass.
Sounds good. The proposal just didn't directly mention any way to pass custom data other than textures and modulate, so I was a bit worried. But if using a ShaderMaterial with it is possible when you need some extra parameters and customization, it sounds good to me.
@Zylann It's synchronous; if you want to get the texture back to the CPU you can, but you will stall rendering until that part is done. Implementing asynchronous texture retrieval is planned, so eventually I guess that could work too.
@reduz You might have misunderstood me: I meant that the actual rendering should likely not happen literally at the moment the function is called, depending on the use case, because that can have a terrible impact on performance (and I'm not sure why it would need to, outside of requesting the data on the CPU, which stalls either way). For tools it's probably fine, but not in a game if used for post-processing that happens every frame. Rather, I'd expect rendering to be queued and executed at once with the rest, or at a specific time, and only stalled if the user wants to get the result as an image immediately after the call, for example. That's why I posted my question.

Even for tools it could help: one reason I was interested in a more specialized API is that when I implemented terrain erosion, I had to run the same shader on a texture 30 times, which had to take 30 frames. I don't actually need the image on the CPU between each of them. With the drawable API I can do it in one frame, but if that's 30 synchronous calls, there will be multiplied overhead (the same overhead I refer to in the post-processing case, assuming this proposal can also be used for that). Although, regarding async download, that would be nice to have as well (and not just for textures).
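For illustration, the erosion case with queued blits might look like this (a sketch using this proposal's hypothetical texture_drawable_blit, ping-ponging between two drawable textures since a pass can't generally read the texture it writes):

```cpp
// 30 erosion passes queued in a single frame, alternating two textures.
RenderingServer *rs = RenderingServer::get_singleton();
RID height[2] = { heightmap_a, heightmap_b }; // two drawable textures, same size
Rect2i full_rect(0, 0, 1024, 1024);
for (int i = 0; i < 30; i++) {
	// Write into one texture while sampling the other.
	rs->texture_drawable_blit({ height[(i + 1) % 2] }, full_rect,
			erosion_material, Color(1, 1, 1), { height[i % 2] });
}
```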
@Zylann Oh yeah, it won't happen at the time you call it.
@reduz Is your thinking that we would call an update function in the main loop? Something like:
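Presumably something like this (a sketch; the call is this proposal's hypothetical blit, and the member variables are assumed to exist):

```cpp
// Hypothetical per-frame usage: queue blits from the game loop; the renderer
// executes them together with the rest of the frame's work.
void MyEffect::_process(double p_delta) {
	RenderingServer *rs = RenderingServer::get_singleton();
	rs->texture_drawable_blit({ target_texture }, Rect2i(Point2i(), texture_size),
			blit_material, Color(1, 1, 1), { source_texture });
}
```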
|
Let me tell you what my brain gets from this feature, if I understand it right...

Option 1: We could have a texture, say a 512x512 set of pixels, then at runtime, as a game progresses, draw or change RGBA values of a single pixel (or a region from XY to XY) in the texture based on events and stats of the game, i.e. AoE or Zelda on SNES. (Think: every tile in a 2D scene; or in a 3D scene, every flat 2D plane divided into 2D tiles like a checkerboard, but existing as planes in 3D space; or voxels.) For instance, world maps: take a player starting a game with fog of war turned on (as in Age of Empires); as the villagers explore, more of the map gets revealed in the bottom-right corner of the screen. Level maps: in top-down Zelda, you can explore dungeons and different areas of the overworld; different maps for 2D tile placements based on the values in RGBA, or even just RGB, with R*G as 65,535 tiles and 0 as null, B as the status of a tile, maybe A as visibility. Why this would come in handy: saving and loading game data, and the ability to map it or even design levels and world maps, as PNG/bitmap/JPG or actual textures.

Option 2: Draw damage and make use of decals to add to existing textures on 3D models, especially paintball mode.

Option 3: Make changes at runtime to the various texture maps used in a material on a 3D model, to change properties such as emissive lighting, diffuse lighting, shininess, even making metal look like it turned into wood because the model got hit with a "terraform gun" of sorts. (Is that a real weapon or did I just invent it? Hey, use it, I'll play your game, especially if it's a Mars terraforming themed game. :-D)

Option 4: Change terrain at runtime, such as changing a yellow brick road into asphalt or a stone road.

P.S. Sorry this was so long.
@reduz How does this affect ImageTextures that are updated from a CPU-side Image? For some effects it is useful to manipulate a bitmap directly and then push that up to an ImageTexture. In many cases, the region of the bitmap altered will be much smaller than the whole image. Godot only supports updating an ImageTexture from an Image source of the same size, whereas other libraries allow a partial update of a texture from a smaller image. This suits effects like brush painting and dirty-rect updates of a large texture. Being able to draw directly to a texture would improve the update process, by using a set of smaller ImageTextures and blitting them to the drawable texture when they are updated. What would be ideal is an image texture that supports partial updates from smaller images, and that can be drawn to as well. This would very neatly provide a mechanism for a texture to be modified by bitmap editing, GPU-side drawing, and also shaders.
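For instance, the dirty-rect pattern could look roughly like this with the proposed API (a sketch; texture_drawable_blit and its parameters are assumptions based on this thread):

```cpp
// Update only the changed region: upload a small CPU-edited image as a tile,
// then blit that tile into the large drawable texture at the dirty position.
Ref<ImageTexture> tile = ImageTexture::create_from_image(brush_image);
RenderingServer *rs = RenderingServer::get_singleton();
rs->texture_drawable_blit({ large_drawable }, Rect2i(dirty_pos, brush_image->get_size()),
		copy_material, Color(1, 1, 1), { tile->get_rid() });
```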
You don't have to; I think this is entirely user driven, they will call the functions every frame with whatever they need to do.
That's a different use case and should not be affected. It's kind of similar, except you do the work on the CPU.
@reduz In earlier comments you say that the rendering won't happen when the function is called. So when should rendering of the texture happen?
Still related to my earlier post, I'd be curious to see if that order can be controlled, because it matters for whether this API would be usable for post-processing as well.
@clayjohn Oh, I meant that it does not happen when the function is called in the sense that you can't get the result back on the CPU immediately. The rendering API is called immediately when this function is called, if the render thread is on the main thread.
One thing I'm wondering about: is this actually still needed? We have all the ingredients already to code this in GDScript with access to RenderingDevice. For now that's Vulkan only, but at some point we should do some work on Compatibility to expose things there a little better as well. Have a look at godotengine/godot-demo-projects#938, which shows how you can render to a buffer using compute shaders and then use it with a Texture2DRD.
@SlugFiller RenderingDevice isn't available on the Compatibility renderer. At all. Nor will it ever be. This feature is renderer-agnostic. It is therefore more suitable for add-ons that don't want to limit their users, or game devs looking at web export for browsers that exist today. Even if the Compatibility renderer eventually exposes its own set of classes, it would still be very difficult to create a device-agnostic class like the above in GDScript. Detecting which classes are available, avoiding crashes due to missing classes, and bridging all the tiny differences between the exposed interfaces would simply be too complex.
@SlugFiller I know that, but there has been a long-standing wish to expose more parts of the Compatibility renderer in similar ways to what we're doing right now with the RenderingDevice-based renderers. They are structurally different, so you'll always end up implementing things separately if you want to support both renderers, but exposing the ability to write your own shader passes on OpenGL is definitely something we could investigate.
There's nothing about the basic drawable-textures concept that should depend on RenderingDevice-specific functionality. Plus, I would suggest that there's generally no need to use a full compute shader for these tasks: often a big triangle with a vertex/fragment shader ought to suffice (and avoiding compute would make it much simpler to support both RD and Compatibility). So I can see both approaches being useful for different use cases: compute shader + Texture2DRD for high-end processing, and this proposal's DrawableTexture, as a full-screen vertex + fragment pass, otherwise.
@lyuma You can do both raster and compute through RenderingDevice. But I get what you're saying: this API makes it much easier to do this, while using RD requires a bit more in-depth knowledge (though that could be wrapped up in a plugin).
I want to chime in and say that it would be useful to be able to render any mesh using methods similar to these. One use case would be, for example, to "bake" mesh data to a texture according to some shader. In that case, you'd use the UV as the POSITION output:
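For instance, a spatial shader roughly like this (a sketch; the UV-to-clip-space remap is the key idea, and a Y flip may be needed depending on the render target):

```glsl
shader_type spatial;
render_mode unshaded;

void vertex() {
	// Rasterize the mesh over its UV layout instead of its 3D position:
	// remap UV from [0,1] to clip space [-1,1] and output it as POSITION.
	POSITION = vec4(UV * 2.0 - 1.0, 0.0, 1.0);
}

void fragment() {
	// Bake the normal into the target, remapped from [-1,1] to [0,1].
	ALBEDO = NORMAL * 0.5 + 0.5;
}
```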
Here, we bake normals to a texture. This might be useful in case we need to do some operations in a compute shader which require the corresponding normals to be available in a texture. More steps would likely be required for the final texture output (whatever that may be), like dilating the texture to cover the UV seams. This is not easy to set up in Godot currently. In Unity, you are able to set the render target to a RenderTexture and invoke Graphics.DrawMeshNow() using a provided shader, which proves very useful for a scenario such as this. Once the texture has been acquired, you could blit it with some other shader to dilate or do other operations.
Hmm, perhaps adding:

```cpp
void texture_drawable_blit_mesh(const Vector<RID> &p_textures, const RID &p_mesh, const RID &p_skeleton, const Color &p_modulate, const Vector<RID> &p_source_textures, int p_to_mipmap = 0);
```
That looks good, but you want to be able to pass in a shader along with the mesh, not just textures and color modulation.
Ah, right, I missed the material parameter. This should be:

```cpp
void texture_drawable_blit_mesh(const Vector<RID> &p_textures, RID p_mesh, RID p_skeleton, RID p_material, const Color &p_modulate, const Vector<RID> &p_source_textures, int p_to_mipmap = 0);
```
Can we please support single-channel formats? We're definitely missing these, as we do a lot of effects with a single-channel texture... say R8, RH, RF?
We also need RH, RF and integer-based formats on these drawable textures, and in Texture and Image. Our terrain heights are painted with RF. We're painting color with RGBA8. We're painting textures using a bit-packed integer control map interpreted through an RF texture, which is clunky. Shaders already support usampler, and the back-end RenderingDevice already supports integer formats. Image and Texture should be completed to support all common OpenGL formats. If Terrain3D and other projects are to support GPU-based texture modification on all hardware, we either need this proposal or access to the RenderingDevice for texture creation and modification in OpenGL/Compatibility mode. Which is more likely in the future: access to the RenderingDevice in OpenGL to create and modify uint textures, or integer formats in Texture, Image, and DrawableTexture?
From what I understood, the concern about available formats has to do with GL support. As such, the following should be available at a minimum (based on code here):
These are the formats for which support is expected on any device. So, naturally, they should all be supported. |
Those are some of the common formats supported for reading. The list is different for the common formats supported for writing. For OpenGL ES 3.0 the list is: R8, RG8, RGB8, RGB565, RGBA4, RGB5_A1, RGBA8, RGB10_A2, RGB10_A2UI, SRGB8_ALPHA8, and the signed/unsigned integer formats (R8I/R8UI through RGBA32I/RGBA32UI).
For Desktop GL it is everything in the ES 3.0 list, plus the float formats (R16F, RG16F, RGBA16F, R32F, RG32F, RGBA32F, R11F_G11F_B10F) and the normalized 16-bit formats (R16, RG16, RGBA16).

Vulkan supports many more formats on top of these.
GLES has no writing to float or half? At least that's what I'm getting from the list. Also, what are the I/UI formats for? An alternative to float writing?

I do kinda agree that the formats should be the lowest common denominator. But it's still way more formats than the specified 4. (Also, I'm not seeing RGBAF in there.)

(Also, for reference: how many formats are gone if you add GLES2 to the mix as well?)
The GL ES 2.0 spec only requires the following renderable formats to be supported: RGBA4, RGB5_A1, and RGB565.
The OpenGL ES 3.0 specification (PDF) lists supported texture formats with bit depths from 8 to 32 over 1-4 channels, for float and integer, on pages 143-145 of the file (marked 130-132). The first list Clayjohn gave is p. 141, "Texture and renderbuffer color formats". The second list is part of the "texture-only color formats". Textures cannot be read in any format unless they can also be written to, so all of these formats should be writeable, though only half can be used as a renderbuffer, which we users can't use in Godot anyway. RGBAF is 32 bits over each of 4 channels, or 128 bits, aka RGBA32F, which is on the list.
Yes, the signed and unsigned integer formats of various bit depths are specifically for writing int data. They aren't "an alternative to float writing"; float writing is the clunky alternative when you want to write integers.
Sorry, that's not what that means. Renderbuffer formats are formats you can render to with a shader. Texture-only formats are formats you can upload from the CPU; you cannot render to texture-only formats. All the target formats for drawable textures need to be renderable, so they cannot be texture-only formats.
Any news on this? I'm as hyped as when I first saw this proposal!
To my knowledge, nobody has started working on implementing this feature yet. It's still desired, so contributions are welcome 🙂 |
Describe the project you are working on
Godot
Describe the problem or limitation you are having in your project
One feature that is widely requested in Godot is the ability to easily have a texture and just draw to it (or even run a custom shader on it).
Examples of things you want to do:

- Paint onto a texture with brushes (texture painting, decals, damage).
- Reveal a fog-of-war or minimap texture as the player explores.
- Modify terrain or material textures at runtime.
- Run iterative GPU passes over a texture (erosion, fluid simulation, baking).
Traditionally this has been done with Viewports, but that approach is still relatively limited: you can only write to a single image, writing to alpha is more complex, and you can't run multiple writing iterations per frame (ping-pong).
While in Godot 4.0+ users can access compute and RenderingDevice, this is still hard and beyond the experience level of most users, plus it won't work on the GLES3 (Compatibility) renderer.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
Ideally, it should be possible to expose a very simple API for this via RenderingServer, plus a higher-level DrawableTexture2D for regular usage.
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
At low level, the following API should be exposed for RenderingServer:
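Something along these lines (a sketch; names and exact parameters are assumptions, modeled on the blit_mesh variant and the four formats discussed in the comments):

```cpp
enum TextureDrawableFormat {
	TEXTURE_DRAWABLE_FORMAT_RGBA8,      // 8-bit normalized
	TEXTURE_DRAWABLE_FORMAT_RGBA8_SRGB, // 8-bit normalized, sRGB
	TEXTURE_DRAWABLE_FORMAT_RGBAH,      // 16-bit float
	TEXTURE_DRAWABLE_FORMAT_RGBAF,      // 32-bit float
};

// Create a texture that can be used as a blit target.
RID texture_drawable_create(int p_width, int p_height, TextureDrawableFormat p_format, bool p_mipmaps = false);

// Draw p_material over p_rect of the target textures, sampling from optional
// source textures; p_modulate tints the result.
void texture_drawable_blit(const Vector<RID> &p_textures, const Rect2i &p_rect, RID p_material, const Color &p_modulate, const Vector<RID> &p_source_textures, int p_to_mipmap = 0);
```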
And that's it! We also need to add a new type of shader in Godot: SHADER_TYPE_TEXTURE_BLIT.
Example of how to use:
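Presumably something along these lines (a sketch; the blit function name and built-ins like SOURCE0, UV, MODULATE and COLOR are assumptions, not final names):

```glsl
shader_type texture_blit;

void blit() {
	// Sample the first source texture and tint it with the modulate color.
	COLOR = texture(SOURCE0, UV) * MODULATE;
}
```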
Then put it in a material and use it.
Higher Level Abstraction
A resource can be provided:
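Roughly like this (a sketch; the class and method names are assumptions):

```cpp
class DrawableTexture2D : public Texture2D {
	GDCLASS(DrawableTexture2D, Texture2D);

public:
	void set_size(const Size2i &p_size);
	void set_format(TextureDrawableFormat p_format); // hypothetical enum from the server API sketched above

	// Blit a material over a region of this texture, optionally sampling
	// from extra source textures.
	void blit_rect(const Rect2i &p_rect, const Ref<Material> &p_material,
			const Color &p_modulate = Color(1, 1, 1, 1),
			const TypedArray<Texture2D> &p_sources = TypedArray<Texture2D>());
};
```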
If this enhancement will not be used often, can it be worked around with a few lines of script?
N/A
Is there a reason why this should be core and not an add-on in the asset library?
N/A