Unified 2D Rendering #5
base: master
Conversation
I think consistency is generally a goal we should pursue.
I'd like to know more about the drawbacks; can you elaborate on the existing point and possibly add more? Have you already looked into the code to see if other changes are required internally?
I'll look more into that shortly. The main thing that could cause issues is that systems that work on both 3D and 2D transforms will need to either be split, or manipulate both Transform3D and Transform2D at once.
What is the motivation for refactoring
I think a mesh always has an albedo texture, but not always the other texture types, so I made it reflect that. In the case where you use an alternative 3D pass, for example Wireframe, you can omit the texture and use a Wireframe tag component. The whole concept is that instead of having a PassSelector component, it's the attached components that determine the behavior. If you have PbmMaterial and TextureHandle, you draw using the pbm pass. If you don't have PbmMaterial but you have TextureHandle, you use DrawFlat.
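To make the idea concrete, here is a minimal sketch of that selection logic (the `Pass` enum, `select_pass`, and the stub component definitions are invented for illustration; only the names `PbmMaterial`, `TextureHandle`, and `Wireframe` come from the comment above):

```rust
// Stub components standing in for the real ones.
struct PbmMaterial;
struct TextureHandle;
struct Wireframe; // tag component: its presence alone changes behavior

/// Hypothetical pass choice, driven purely by which components are attached.
enum Pass { DrawPbm, DrawFlat, Wireframe, Skip }

fn select_pass(
    pbm: Option<&PbmMaterial>,
    texture: Option<&TextureHandle>,
    wireframe: Option<&Wireframe>,
) -> Pass {
    match (pbm, texture, wireframe) {
        // The Wireframe tag opts out of textures entirely.
        (_, _, Some(_)) => Pass::Wireframe,
        // PbmMaterial + TextureHandle -> draw with the pbm pass.
        (Some(_), Some(_), None) => Pass::DrawPbm,
        // TextureHandle without PbmMaterial -> DrawFlat.
        (None, Some(_), None) => Pass::DrawFlat,
        _ => Pass::Skip,
    }
}
```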
I'm doing a quick read and writing down some statements and questions here.
BTW, the one thing you absolutely need to draw 100_000 of something is instanced drawing.
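For readers who haven't met the term: instead of one draw call per sprite, you upload the quad geometry once and push all per-sprite data through an instance buffer, so the whole batch collapses into a single call. A rough sketch (the struct layout and the commented draw call are illustrative, not an existing amethyst API):

```rust
/// Per-instance data: one of these per sprite, instead of one draw call each.
#[repr(C)]
#[derive(Clone, Copy)]
struct SpriteInstance {
    transform: [[f32; 4]; 4],  // per-sprite model matrix
    uv_offset_scale: [f32; 4], // sub-rectangle of the sprite sheet
    tint: [f32; 4],            // per-sprite color multiplier
}

// Conceptually the whole batch then becomes a single instanced call,
// e.g. (wgpu-style ranges, shown only for illustration):
//     render_pass.draw(0..6, 0..instances.len() as u32);
// i.e. 6 quad vertices, N instances, one draw call.
```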
@Jojolepro if so we should have separate materials for all passes, but that will also make it harder to swap passes during development.
@Rhuagh you are right, but the number of passes will go down by a lot with this RFC. All 2D passes will be merged into two passes (lit and flat); 3D stays with the same passes. I don't want to touch 3D right now, but I imagine something like:

- DrawFlat:
- DrawShaded:
- DrawPbm:
@omni-viral
There won't be a
I haven't even used nalgebra once since it was merged, but I'll look into it. :)
Yeah, it's a tag component (NullStorage). Normally, you don't want to join over every single entity that has a
If that's what you want to do, you can always use AntiStorage:

(&transforms2d, !&screenspace).join() // joins over Transform2D entities that don't have ScreenSpace
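Spelled out inside a specs system, the anti-join looks roughly like this (assuming `Transform2D` and `ScreenSpace` implement `Component`; the system itself is made up for illustration):

```rust
use specs::{Join, ReadStorage, System};

struct WorldSpace2DPass;

impl<'a> System<'a> for WorldSpace2DPass {
    type SystemData = (
        ReadStorage<'a, Transform2D>,
        ReadStorage<'a, ScreenSpace>,
    );

    fn run(&mut self, (transforms2d, screenspace): Self::SystemData) {
        // `!` wraps the storage in an AntiStorage, so the join yields only
        // entities that have a Transform2D but NO ScreenSpace tag.
        for (transform, _) in (&transforms2d, !&screenspace).join() {
            // ... draw `transform` with the world-space 2D pass
        }
    }
}
```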
Not sure what to do about that exactly.
Technically, even if you have one Component for each effect (i.e.
It takes less space like this if you only use one or two effects at once, instead of having a big component for all effects with 20+ f32 fields inside.
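A sketch of the size argument, with invented component names:

```rust
// Small, opt-in effect components: an entity only pays for what it uses.
struct Glow { intensity: f32, color: [f32; 4] }    // 5 f32
struct Outline { width: f32, color: [f32; 4] }     // 5 f32

// versus one monolithic component that every effect-capable entity carries:
struct AllEffects {
    glow_intensity: f32,
    glow_color: [f32; 4],
    outline_width: f32,
    outline_color: [f32; 4],
    // ... 20+ f32 in total once every effect is folded in
}
```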
Perfect! So it makes sense not to allow custom 2D meshes, but to use sprites combined with an alpha mask (a feature discussed in the UI RFC).
Discussed on Discord. Mostly, rasterization will almost exclusively happen at the start of the scene, except for rapidly changing text (timers, for example). However, gfx_glyph and other libraries already do per-character caching, which solves this issue. Thanks a lot for your feedback @omni-viral and @Rhuagh!
Shaded uses the normal texture too, I think?
It seems to use emission: https://github.com/amethyst/amethyst/blob/43b70ea3afc52c92e425029ccb417972d93cda73/amethyst_renderer/src/pass/shaders/fragment/shaded.glsl#L41. In which case, the component should probably be
While I applaud the effort to unify the rendering of multiple types of components to share more code and passes, I think some of the other assumptions in this RFC may come back to bite us. Specifically, I believe introducing the Transform2D as you have defined it will be very damaging.
I believe it's good to improve ergonomics, but I don't think this is the right way to do it. Instead, I would recommend two things:
Rendering 2D always happens after rendering 3D, no matter what. For reference, I explicitly wrote the Transform2D definition here: https://github.com/amethyst/rfcs/pull/5/files#diff-80f13f763d9f72432e39778b4b8a9494R189
I have taken this into account when designing the RFC. While you could in theory use the standard approach to draw text in 3D space, it doesn't work well with layouting. The current solution is that when you want 3D UI, you first draw it to a texture, which is then put onto a 3D mesh (a flat plane) in the world. You will also be able to trigger UiEvents using raycasts in the 3D world (I saw a game do that, it's pretty cool). This is part of the bigger UI RFC, which I am still working on. More details coming soon ;)
The same way I mentioned in the previous point: it will be on a 3D mesh following the player, with a low rendering priority (also something that will require an RFC). Low rendering priority is what is used to render objects over other objects, even when the depth doesn't make sense, e.g. a gun clipping through a wall in an FPS game will still be rendered on top (Roblox excluded).
I assumed lights could only come from the side in 2 dimensions. Was that a wrong assumption?
That's a good idea, but it was already discussed previously. There wasn't a consensus on whether it was a good idea or not, and no one had a way to go forward with it without making performance and usability worse. Thanks for the comments!
Also of note: I'm not 100% satisfied with the current definition of Transform2D. I'm trying to find out if I can cut more data from it somehow. It's a bit tedious once you take layouting and parenting into account. I'm still open to propositions for that type.
Sorry, I misread the position vector. Since the renderer supports 3D, I don't see a technical reason to disregard Z and what is effectively the 3D world position of a 2D element. I believe mixing UI/text/images with 3D is super important to enable users to create more compelling 2D experiences. I think an interesting proposition would be to see how we can use the 3D passes to render all 2D elements, while still providing the user with the experience they want. From a usability standpoint, users don't want to have to care about a third dimension or weird camera transforms when they start out, but I think this can be solved with the IR.
Layouting can work fine in 3D if it only considers 2 dimensions of the space defined by a parent. Sure, it would be weird to rotate something that has been laid out in 2 dimensions, but don't do that. :)
Even if a world is authored using 2D art, normal maps can still be generated and used to provide a 3D feeling if the engine provides 3D rendering.
No, you don't need that either. You can just hardcode a quad into the vertex shader.
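"Hardcoding the quad" usually means deriving the corner position from the vertex index, so no vertex buffer is bound at all. A sketch of the idea, with the GLSL carried in a Rust constant purely for illustration (the shader below is not amethyst's actual code):

```rust
// Drawn as a 4-vertex triangle strip with no vertex buffer bound:
// the corner is looked up from gl_VertexID inside the shader.
const QUAD_VERT: &str = r#"
#version 150 core

const vec2 CORNERS[4] = vec2[4](
    vec2(0.0, 0.0), vec2(1.0, 0.0), vec2(0.0, 1.0), vec2(1.0, 1.0)
);

uniform mat4 model;

void main() {
    vec2 corner = CORNERS[gl_VertexID];
    gl_Position = model * vec4(corner, 0.0, 1.0);
}
"#;
```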
I need more opinions on the Guide-Level Explanation -> Transform2D section. I want to know who is for and who is against separating Transform into Transform2D and Transform3D.

Drawback:

Here's the sample implementation of both.

3D:

```rust
pub struct Transform3D {
    /// Translation + rotation value
    iso: Isometry3<f32>,
    /// Scale vector
    scale: Vector3<f32>,
}
```

2D:

```rust
pub struct Transform2D {
    translation: Vector2<f32>,
    layer: u32,              // z coordinate, but faster; a higher layer is drawn on top
    rotation: f32,           // radians, anti-clockwise
    dimension: Vector2<f32>, // all 2D elements have fixed dimensions; used by the renderer (sprites and UI) and potentially by physics
    scale: Vector2<f32>,     // multiplies your own dimension AND those of the child entities
}
```

If you are against separating, we will need to add a separate Component named "Dimensions" to all 2D entities, which will hold the dimension value. The z coordinate and rotation will make less sense than in 3D, and will have worse performance. Let me know your vote with a 👍 👎 or with a comment. Thanks!
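To see how this maps onto what a pass consumes, here is a hypothetical helper (not part of the RFC; it assumes access to the fields above) that lowers a Transform2D into a 4x4 model matrix, with layer becoming the z coordinate:

```rust
use nalgebra::{Matrix4, Vector3};

/// Hypothetical helper: lower a Transform2D into the homogeneous model
/// matrix a pass would consume, mapping `layer` onto the z axis.
fn model_matrix(t: &Transform2D) -> Matrix4<f32> {
    let translation = Matrix4::new_translation(&Vector3::new(
        t.translation.x,
        t.translation.y,
        t.layer as f32, // higher layer = closer to the camera = on top
    ));
    // Anti-clockwise rotation around the z axis, in radians.
    let rotation = Matrix4::new_rotation(Vector3::z() * t.rotation);
    // `scale` multiplies the element's own `dimension`.
    let scale = Matrix4::new_nonuniform_scaling(&Vector3::new(
        t.scale.x * t.dimension.x,
        t.scale.y * t.dimension.y,
        1.0,
    ));
    translation * rotation * scale
}
```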
@amethyst/render-devs please take a look at this.
@Jojolepro I need strong motivation arguments. The Motivation paragraph in the RFC is ... I have no idea what you are talking about there :)
I think a distinction between 2D and 3D elements in the form of a Dimensions is good, though I believe it would be cleaner to make Transform2D a strict superset of Transform in terms of data representation.
This will make it much easier to write code and tools that work on both types.
@omni-viral To add a rendering feature, you currently need to add it to every 2D pass by copy-pasting. The motivation here is to merge the components into more reusable ones so we can remove the copy-paste. Less copy-paste = fewer bugs and less maintenance.
@kabergstrom that's the alternative solution of adding a Dimensions component. What you are proposing involves the same drawbacks as my proposition, but without the advantages.
If we separate them, we at least need a generic API for working with them. I think @Rhuagh has experience with this ;)
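A rough sketch of what such a generic API could look like (the trait is invented, and `model_matrix` refers to the illustrative helper sketched above; it again assumes access to the struct fields):

```rust
use nalgebra::Matrix4;

/// Hypothetical trait: anything that can lower itself to a model matrix,
/// so systems and tools can be written once over both transform types.
trait ToModelMatrix {
    fn to_model_matrix(&self) -> Matrix4<f32>;
}

impl ToModelMatrix for Transform3D {
    fn to_model_matrix(&self) -> Matrix4<f32> {
        // Isometry (translation + rotation) first, then the scale.
        self.iso.to_homogeneous()
            * Matrix4::new_nonuniform_scaling(&self.scale)
    }
}

impl ToModelMatrix for Transform2D {
    fn to_model_matrix(&self) -> Matrix4<f32> {
        model_matrix(self) // the sketch from the earlier comment
    }
}
```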
Again. Why not use
@Jojolepro From a tooling perspective, I disagree. Tools would most likely have an easier time working with a superset of Transform than with a very different data representation. I also think it's better to retain 3D rotations even when working in 2D, because you may want to render these in a 3D world. And in my opinion, rendering UI to a texture before rendering it in-world is not acceptable for a number of use-cases, especially on high-resolution displays, due to the VRAM requirements of intermediate texture buffers.
Oh, one more thing I thought about: Pivots. Since the Transform2D concept defines dimensions, it will be essential to be able to define a pivot point for rotations. |
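For reference, the usual construction: translate the pivot to the origin, rotate, translate back. A minimal 2D sketch with nalgebra (the function is illustrative, not proposed API):

```rust
use nalgebra::{Matrix3, Vector2};

/// Rotate `angle` radians (anti-clockwise) around `pivot`, expressed in the
/// element's local space, as a homogeneous 2D matrix.
fn rotate_about_pivot(angle: f32, pivot: Vector2<f32>) -> Matrix3<f32> {
    let to_origin = Matrix3::new_translation(&(-pivot));
    let rotate = Matrix3::new_rotation(angle);
    let back = Matrix3::new_translation(&pivot);
    back * rotate * to_origin
}
```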
I'd like to move to close this RFC. There are some interesting concepts I don't want to reject, but what I want to do is move to keep the current
As for the UI, I'd be in favour of using the
Reasoning:
@kabergstrom @Frizi @omni-viral Feature status: UV2D, ScreenSpace, and Overflow are technically tasks of the rendering team too, with high priority (renderer-only changes). They are blockers to the work of the UI team, so they should be worked on as soon as possible. Is there anyone who wants to take the lead on one, two, or all three of those features? Also, the rendering team lead should take note of the Material Effects and Material Pbm propositions, or explicitly dismiss them, as this RFC is now closed. Thanks everyone.
This RFC proposes an architecture that converts user-friendly components into reusable pass-friendly components.
Rendered version: Rendered
Let me know if you have any questions. :)