Support for Mipmapped Textures #1685
In this case, the texture is a procedurally-generated fractal, so it can be directly generated at any size. We can provide original source data for each mipmap level, rather than telling bevy to generate the mipmaps by down-scaling the base image. This produces prettier-looking results.
This is a nice feature. I'm not experienced enough with mips to feel qualified to review the actual implementation (though it looks reasonable to me), but I had a few other comments.
Besides the CPU implementation, which will be slow, this looks good to me! Ideally we would want a way of generating mipmaps on the GPU and caching the results to disk or something similar, but this gives us mipmaps now, which will vastly improve the quality of Bevy's 3D rendering. 👍
No, ideally we support loading pre-generated mipmaps from the file formats that support them, like GLTF and DDS, and people should pre-generate their mipmaps and use those formats. Any sort of in-Bevy mipmap generation should only be a fallback. No matter how it is implemented, generic texture rescaling is not going to produce correct / good-looking results for all types of textures (for one example, consider normal maps). Implementing a "disk caching" feature sounds like unjustified complexity for something that should really only be used as a fallback.

Using the GPU would be nice, though, yeah. I wanted to get some sort of support going as quickly and easily as possible, so I decided to just use the `image` crate. I think GPU mipmap generation should be a separate PR.
Could you add a
That issue should already be closed; it was fixed by #1639, which removed the error to allow loading the GLTF files (but rendering them without mipmaps for the time being, until this PR is merged).
Closing this PR, as it is for the old renderer, which will soon be obsolete. I will soon post a new PR for pipelined rendering.
This PR adds support for using mipmapped textures in Bevy.

The `Texture` asset can now contain the data for multiple mipmap levels. When that is the case, all of the mipmap levels will be copied into the GPU to be used for rendering.
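To make the level layout concrete: a full mip chain halves each dimension (clamped to 1) until a 1x1 level is reached. A minimal self-contained sketch of that arithmetic, with function names of our own choosing rather than Bevy's API:

```rust
/// Number of levels in a full mip chain for a base image of
/// `width` x `height`: floor(log2(max(w, h))) + 1.
fn mip_level_count(width: u32, height: u32) -> u32 {
    32 - width.max(height).leading_zeros()
}

/// Dimensions of a given mip level: each level halves both
/// dimensions, never dropping below 1.
fn mip_dimensions(width: u32, height: u32, level: u32) -> (u32, u32) {
    ((width >> level).max(1), (height >> level).max(1))
}

fn main() {
    // A 256x256 base image has levels 256, 128, ..., 2, 1 => 9 levels.
    assert_eq!(mip_level_count(256, 256), 9);
    // Non-square images clamp the smaller dimension at 1.
    assert_eq!(mip_dimensions(256, 128, 3), (32, 16));
    assert_eq!(mip_dimensions(256, 128, 8), (1, 1));
    println!("ok");
}
```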
Runtime Generation

There is also a method to allow generating mipmap images at runtime, on the CPU, by downscaling the texture using the `image` crate.
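As a self-contained illustration of what CPU mip generation does conceptually (the actual implementation delegates the resizing to the `image` crate; this standalone 2x2 box filter is only a sketch):

```rust
/// Downscale an RGBA8 image to half size with a 2x2 box filter,
/// producing the next mip level. Odd dimensions clamp at the edge.
fn downscale_rgba(src: &[u8], w: usize, h: usize) -> (Vec<u8>, usize, usize) {
    let (nw, nh) = ((w / 2).max(1), (h / 2).max(1));
    let mut dst = vec![0u8; nw * nh * 4];
    for y in 0..nh {
        for x in 0..nw {
            for c in 0..4 {
                // Average the four source texels covered by this
                // destination texel, per channel.
                let mut sum = 0u32;
                for (dy, dx) in [(0, 0), (0, 1), (1, 0), (1, 1)] {
                    let sx = (2 * x + dx).min(w - 1);
                    let sy = (2 * y + dy).min(h - 1);
                    sum += src[(sy * w + sx) * 4 + c] as u32;
                }
                dst[(y * nw + x) * 4 + c] = (sum / 4) as u8;
            }
        }
    }
    (dst, nw, nh)
}

fn main() {
    // 2x2 RGBA image; red channel values 0, 100, 100, 200.
    let src: Vec<u8> = vec![
        0, 0, 0, 255, 100, 0, 0, 255, // row 0
        100, 0, 0, 255, 200, 0, 0, 255, // row 1
    ];
    let (dst, nw, nh) = downscale_rgba(&src, 2, 2);
    assert_eq!((nw, nh), (1, 1));
    assert_eq!(dst[0], 100); // (0 + 100 + 100 + 200) / 4
    assert_eq!(dst[3], 255); // alpha preserved
    println!("ok");
}
```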
New example

Further, this PR adds a `texture_filtering` example, to showcase the use of mipmaps and anisotropic filtering, and to allow visual comparison of different filtering modes. This new example is inspired by the `mipmaps` example in the `wgpu-rs` repo.

Other minor improvements
(very limited) GLTF support:
I did not have the time to look into how the GLTF format allows for providing and loading pre-generated mipmap images. This is left as future work.
For now (in this PR) I simply made the GLTF loader generate our own mipmaps at runtime, if the GLTF file specifies that a mipmapped texture sampling mode should be used.
This allows GLTF assets that request to be rendered with mipmaps (such as PBR models) to be rendered approximately correctly (or at least better than without mipmaps), using mipmaps generated in Bevy.
It is, however, certainly not the behavior that users would expect if they want to use pre-generated mipmaps with their assets. In particular, it would produce the wrong results if users want to provide mipmap images that are not (for whatever reason) simple downscaled versions of the base image. I suspect that simple naive rescaling of, say, normal maps, may not produce good results.
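The normal-map concern can be illustrated directly: averaging unit normals, as a naive per-channel downscale effectively does, yields a vector that is no longer unit length (illustrative code only, not from this PR):

```rust
/// Component-wise average of two vectors, as a naive per-channel
/// downscale would compute it.
fn average(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [(a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0, (a[2] + b[2]) / 2.0]
}

fn length(v: [f32; 3]) -> f32 {
    (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt()
}

fn main() {
    let n1 = [1.0, 0.0, 0.0]; // unit normal pointing along +X
    let n2 = [0.0, 0.0, 1.0]; // unit normal pointing along +Z
    let avg = average(n1, n2); // [0.5, 0.0, 0.5]
    // Length is ~0.707, not 1.0: the averaged "normal" skews lighting
    // unless renormalized, and detail is lost either way.
    assert!((length(avg) - 0.70710678).abs() < 1e-4);
    println!("ok");
}
```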
Also, doing it this way obviously does not scale, because it wastes a lot of CPU time for games with many assets.
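For reference, it is the sampler's minification filter that signals whether mipmaps are needed. The enum below mirrors the minFilter values from the glTF 2.0 spec; the helper function is our own illustration, not Bevy's loader API:

```rust
/// Mirrors the sampler minFilter values defined by the glTF 2.0 spec.
/// (Illustrative only; the real loader uses the `gltf` crate's types.)
#[derive(Debug, PartialEq)]
enum MinFilter {
    Nearest,
    Linear,
    NearestMipmapNearest,
    LinearMipmapNearest,
    NearestMipmapLinear,
    LinearMipmapLinear,
}

/// Hypothetical helper: only the *Mipmap* filter modes sample from
/// mip levels, so only they require mipmaps to be present.
fn needs_mipmaps(filter: &MinFilter) -> bool {
    !matches!(filter, MinFilter::Nearest | MinFilter::Linear)
}

fn main() {
    assert!(needs_mipmaps(&MinFilter::LinearMipmapLinear));
    assert!(needs_mipmaps(&MinFilter::NearestMipmapNearest));
    assert!(!needs_mipmaps(&MinFilter::Linear));
    println!("ok");
}
```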
GLTF loader bug: different handling of embedded and externally-referenced textures!
A further limitation of the current GLTF loader: the above only works with GLB files. This is because of the way Bevy handles GLTF files that reference external images (such as PNGs), rather than embedding them in the GLB file.
Bevy simply queues up the external image file to be loaded as a separate texture asset, the same way as if it were loaded directly from the AssetServer. This completely disregards any information about the texture sampler in the GLTF file! This means no mipmaps.
This needs to be fixed in general, because this bug means that GLTF files will be rendered incorrectly and differently from GLB files.
Note that this affects the asset used in our official gltf example! Our example GLTF asset is an ASCII/JSON GLTF file that references its textures from PNGs, meaning that Bevy will disregard all of its sampler configuration and render it incorrectly!
A separate related issue is that embedded images are loaded sequentially by the GLTF loader (no multithreading between them), whereas external images are loaded asynchronously / in parallel by the asset system. This means that GLB files with embedded images (the ones that work correctly with mipmaps!) load much slower.
Future work (TODO file issues!):
Now that mipmaps are supported and every `Texture` could potentially contain many images, a lot more resources may be used up, so it may be worthwhile to investigate potential optimizations:
- Freeing the CPU-side texture data (in the `Texture` asset) after it has been copied into the GPU, to not waste system RAM.
- Uploading texture data via wgpu's `write_texture` method.
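The idea of releasing the CPU-side copy after upload can be sketched as a simple ownership hand-off (hypothetical types, not Bevy's actual asset types):

```rust
/// Hypothetical texture asset that keeps its texel bytes in an Option
/// so they can be dropped once the GPU has its own copy.
struct LoadedTexture {
    data: Option<Vec<u8>>, // None after upload; system RAM is freed
}

impl LoadedTexture {
    /// Hand the bytes off for upload and drop the CPU-side copy.
    /// A real renderer would issue a texture copy here instead.
    fn take_for_upload(&mut self) -> Vec<u8> {
        self.data.take().expect("texture data already uploaded")
    }
}

fn main() {
    let mut tex = LoadedTexture { data: Some(vec![0u8; 16]) };
    let bytes = tex.take_for_upload();
    assert_eq!(bytes.len(), 16);
    assert!(tex.data.is_none()); // CPU copy released
    println!("ok");
}
```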