Bevy Asset V2 #8624

Merged: 149 commits, Sep 7, 2023
Conversation

cart (Member) commented May 16, 2023

Bevy Asset V2 Proposal

Why Does Bevy Need A New Asset System?

Asset pipelines are a central part of the gamedev process. Bevy's current asset system is missing a number of features that make it non-viable for many classes of gamedev. After plenty of discussions and a long community feedback period, we've identified a number of missing features:

  • Asset Preprocessing: it should be possible to "preprocess" / "compile" / "crunch" assets at "development time" rather than when the game starts up. This enables offloading expensive work from deployed apps, faster asset loading, less runtime memory usage, etc.
  • Per-Asset Loader Settings: Individual assets cannot define their own loaders that override the defaults. Additionally, they cannot provide per-asset settings to their loaders. This is a huge limitation, as many asset types don't provide all information necessary for Bevy inside the asset. For example, a raw PNG image says nothing about how it should be sampled (ex: linear vs nearest).
  • Asset .meta files: assets should have configuration files stored adjacent to the asset in question, which allows the user to configure asset-type-specific settings. These settings should be accessible during the pre-processing phase. Modifying a .meta file should trigger a re-processing / re-load of the asset. It should be possible to configure asset loaders from the meta file.
  • Processed Asset Hot Reloading: Changes to processed assets (or their dependencies) should result in re-processing them and re-loading the results in live Bevy Apps.
  • Asset Dependency Tracking: The current bevy_asset has no good way to wait for asset dependencies to load. It punts this as an exercise for consumers of the loader apis, which is unreasonable and error prone. There should be easy, ergonomic ways to wait for assets to load and block some logic on an asset's entire dependency tree loading.
  • Runtime Asset Loading: it should be (optionally) possible to load arbitrary assets dynamically at runtime. This necessitates being able to deploy and run the asset server alongside Bevy Apps on all platforms. For example, we should be able to invoke the shader compiler at runtime, stream scenes from sources like the internet, etc. To keep deployed binaries (and startup times) small, the runtime asset server configuration should be configurable with different settings compared to the "pre processor asset server".
  • Multiple Backends: It should be possible to load assets from arbitrary sources (filesystems, the internet, remote asset servers, etc).
  • Asset Packing: It should be possible to deploy assets in compressed "packs", which makes it easier and more efficient to distribute assets with Bevy Apps.
  • Asset Handoff: It should be possible to hold a "live" asset handle, which correlates to runtime data, without actually holding the asset in memory. Ex: it must be possible to hold a reference to a GPU mesh generated from a "mesh asset" without keeping the mesh data in CPU memory
  • Per-Platform Processed Assets: Different platforms and app distributions have different capabilities and requirements. Some platforms need lower asset resolutions or different asset formats to operate within the hardware constraints of the platform. It should be possible to define per-platform asset processing profiles. And it should be possible to deploy only the assets required for a given platform.

These features have architectural implications that are significant enough to require a full rewrite. The current Bevy Asset implementation got us this far, but it can take us no farther. This PR defines a brand new asset system that implements most of these features, while laying the foundations for the remaining features to be built.

Bevy Asset V2

Here is a quick overview of the features introduced in this PR.

  • Asset Preprocessing: Preprocess assets at development time into more efficient (and configurable) representations
    • Dependency Aware: Dependencies required to process an asset are tracked. If an asset's processed dependency changes, it will be reprocessed
    • Hot Reprocessing/Reloading: detect changes to asset source files, reprocess them if they have changed, and then hot-reload them in Bevy Apps.
    • Only Process Changes: Assets are only re-processed when their source file (or meta file) has changed. This uses hashing and timestamps to avoid processing assets that haven't changed.
    • Transactional and Reliable: Uses write-ahead logging (a technique commonly used by databases) to recover from crashes / forced-exits. Whenever possible it avoids full-reprocessing / only uncompleted transactions will be reprocessed. When the processor is running in parallel with a Bevy App, processor asset writes block Bevy App asset reads. Reading metadata + asset bytes is guaranteed to be transactional / correctly paired.
    • Portable / Run anywhere / Database-free: The processor does not rely on an in-memory database (although it uses some database techniques for reliability). This is important because pretty much all in-memory databases have unsupported platforms or build complications.
    • Configure Processor Defaults Per File Type: You can say "use this processor for all files of this type".
    • Custom Processors: The Processor trait is flexible and unopinionated. It can be implemented by downstream plugins.
    • LoadAndSave Processors: Most asset processing scenarios can be expressed as "run AssetLoader A, save the results using AssetSaver X, and then load the result using AssetLoader B". For example: load a png image using PngImageLoader (which produces an Image asset), then save it using CompressedImageSaver (which takes an Image asset as input and produces an Image asset in a compressed format). This means if you have an AssetLoader for an asset, you are already half way there! It also means that you can share AssetSavers across multiple loaders. Because CompressedImageSaver accepts Bevy's generic Image asset as input, you could also use it with some future JpegImageLoader.
  • Loader and Saver Settings: Asset Loaders and Savers can now define their own settings types, which are passed in as input when an asset is loaded / saved. Each asset can define its own settings.
  • Asset .meta files: configure asset loaders, their settings, enable/disable processing, and configure processor settings
  • Runtime Asset Dependency Tracking: Runtime asset dependencies (ex: if an asset contains a Handle<Image>) are tracked by the asset server. An event is emitted when an asset and all of its dependencies have been loaded.
  • Unprocessed Asset Loading: Assets do not require preprocessing. They can be loaded directly. A processed asset is just a "normal" asset with some extra metadata. Asset Loaders don't need to know or care about whether or not an asset was processed.
  • Async Asset IO: Asset readers/writers use async non-blocking interfaces. Note that because Rust doesn't yet support async traits, there is a bit of manual Boxing / Future boilerplate. This will hopefully be removed in the near future when Rust gets async traits.
  • Pluggable Asset Readers and Writers: Arbitrary asset source readers/writers are supported, both by the processor and the asset server.
  • Better Asset Handles
    • Single Arc Tree: Asset Handles now use a single arc tree that represents the lifetime of the asset. This makes their implementation simpler, more efficient, and allows us to cheaply attach metadata to handles. Ex: the AssetPath of a handle is now directly accessible on the handle itself!
    • Const Typed Handles: typed handles can be constructed in a const context. No more weird "const untyped converted to typed at runtime" patterns!
    • Handles and Ids are Smaller / Faster To Hash / Compare: Typed Handle<T> is now much smaller in memory and AssetId<T> is even smaller.
    • Weak Handle Usage Reduction: In general Handles are now considered to be "strong". Bevy features that previously used "weak Handle<T>" have been ported to AssetId<T>, which makes it statically clear that the features do not hold strong handles (while retaining strong type information). Currently Handle::Weak still exists, but it is very possible that we can remove that entirely.
  • Efficient / Dense Asset Ids: Assets now have efficient dense runtime asset ids, which means we can avoid expensive hash lookups. Assets are stored in Vecs instead of HashMaps. There are now typed and untyped ids, which means we no longer need to store dynamic type information in the ID for typed handles. "AssetPathId" (which was a nightmare from a performance and correctness standpoint) has been entirely removed in favor of dense ids (which are retrieved for a path on load)
  • Direct Asset Loading, with Dependency Tracking: Assets that are defined at runtime can still have their dependencies tracked by the Asset Server (ex: if you create a material at runtime, you can still wait for its textures to load). This is accomplished via the (currently optional) "asset dependency visitor" trait. This system can also be used to define a set of assets to load, then wait for those assets to load.
    • Async folder loading: Folder loading also uses this system and immediately returns a handle to the LoadedFolder asset, which means folder loading no longer blocks on directory traversals.
  • Improved Loader Interface: Loaders now have a specific "top level asset type", which makes returning the top-level asset simpler and statically typed.
  • Basic Image Settings and Processing: Image assets can now be processed into the gpu-friendly Basis Universal format. The ImageLoader now has a setting to define what format the image should be loaded as. Note that this is just a minimal MVP ... plenty of additional work to do here. To demo this, enable the basis-universal feature and turn on asset processing.
  • Simpler Audio Play / AudioSink API: Asset handle providers are cloneable, which means the Audio resource can mint its own handles. This means you can now do let sink_handle = audio.play(music) instead of let sink_handle = audio_sinks.get_handle(audio.play(music)). Note that this might still be replaced by bevy_audio: ECS-based API redesign #8424.
  • Removed Handle Casting From Engine Features: ex: FontAtlases no longer use casting between handle types.
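The "Efficient / Dense Asset Ids" point above (assets stored in Vecs instead of HashMaps) can be sketched with a generational index. Everything here (DenseAssetId, AssetStorage) is an illustrative simplification, not Bevy's actual types:

```rust
// Simplified sketch of dense asset ids: an index + generation pair that
// indexes directly into a Vec, avoiding hash lookups. The names here
// (DenseAssetId, AssetStorage) are illustrative, not Bevy's real types.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct DenseAssetId {
    index: u32,
    generation: u32,
}

pub struct AssetStorage<T> {
    // Each slot stores the current generation and (maybe) an asset.
    slots: Vec<(u32, Option<T>)>,
}

impl<T> AssetStorage<T> {
    pub fn new() -> Self {
        Self { slots: Vec::new() }
    }

    pub fn insert(&mut self, asset: T) -> DenseAssetId {
        // Reuse a free slot if one exists, otherwise grow the Vec.
        if let Some(index) = self.slots.iter().position(|(_, a)| a.is_none()) {
            let (generation, slot) = &mut self.slots[index];
            *generation += 1; // bump generation so stale ids stop resolving
            *slot = Some(asset);
            DenseAssetId { index: index as u32, generation: *generation }
        } else {
            self.slots.push((0, Some(asset)));
            DenseAssetId { index: (self.slots.len() - 1) as u32, generation: 0 }
        }
    }

    pub fn get(&self, id: DenseAssetId) -> Option<&T> {
        // O(1) Vec indexing instead of a HashMap lookup.
        let (generation, slot) = self.slots.get(id.index as usize)?;
        if *generation == id.generation { slot.as_ref() } else { None }
    }

    pub fn remove(&mut self, id: DenseAssetId) -> Option<T> {
        let (generation, slot) = self.slots.get_mut(id.index as usize)?;
        if *generation == id.generation { slot.take() } else { None }
    }
}
```

The generation check is what lets slots be reused without stale ids silently resolving to the wrong asset.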

Using The New Asset System

Normal Unprocessed Asset Loading

By default the AssetPlugin does not use processing. It behaves pretty much the same way as the old system.

If you are defining a custom asset, first derive Asset:

#[derive(Asset)]
struct Thing {
    value: String,
}

Initialize the asset:

app.init_asset::<Thing>()

Implement a new AssetLoader for it:

#[derive(Default)]
struct ThingLoader;

#[derive(Serialize, Deserialize, Default)]
pub struct ThingSettings {
    some_setting: bool,
}

impl AssetLoader for ThingLoader {
    type Asset = Thing;
    type Settings = ThingSettings;

    fn load<'a>(
        &'a self,
        reader: &'a mut Reader,
        settings: &'a ThingSettings,
        load_context: &'a mut LoadContext,
    ) -> BoxedFuture<'a, Result<Thing, anyhow::Error>> {
        Box::pin(async move {
            let mut bytes = Vec::new();
            reader.read_to_end(&mut bytes).await?;
            // convert bytes to value somehow, for example:
            let value = String::from_utf8(bytes)?;
            Ok(Thing { value })
        })
    }

    fn extensions(&self) -> &[&str] {
        &["thing"]
    }
}

Note that this interface will get much cleaner once Rust gets support for async traits. Reader is an async futures_io::AsyncRead. You can stream bytes as they come in or read them all into a Vec<u8>, depending on the context. You can use let handle = load_context.load(path) to kick off a dependency load, retrieve a handle, and register the dependency for the asset.

Then just register the loader in your Bevy app:

app.init_asset_loader::<ThingLoader>()

Now just add your Thing asset files into the assets folder and load them like this:

fn system(asset_server: Res<AssetServer>) {
    let handle: Handle<Thing> = asset_server.load("cool.thing");
}

You can check load states directly via the asset server:

if asset_server.load_state(&handle) == LoadState::Loaded { }

You can also listen for events:

fn system(mut events: EventReader<AssetEvent<Thing>>, handle: Res<SomeThingHandle>) {
    for event in events.iter() {
        if event.is_loaded_with_dependencies(&handle) {
        }
    }
}

Note the new AssetEvent::LoadedWithDependencies, which only fires when the asset is loaded and all dependencies (and their dependencies) have loaded.

Unlike the old asset system, for a given asset path all Handle<T> values point to the same underlying Arc. This means Handles can cheaply hold more asset information, such as the AssetPath:

// prints the AssetPath of the handle
info!("{:?}", handle.path())
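The shared-Arc design can be illustrated with a self-contained sketch (StrongHandle and HandleData are hypothetical names, not Bevy's internals):

```rust
use std::sync::Arc;

// Hypothetical sketch: all strong handles to an asset share one Arc,
// so metadata like the asset path is cheaply accessible from any clone,
// and "same asset?" is a pointer comparison.
#[derive(Debug)]
struct HandleData {
    path: Option<String>,
}

#[derive(Clone, Debug)]
struct StrongHandle(Arc<HandleData>);

impl StrongHandle {
    fn from_path(path: &str) -> Self {
        StrongHandle(Arc::new(HandleData { path: Some(path.to_string()) }))
    }

    // The path travels with the handle; no server lookup required.
    fn path(&self) -> Option<&str> {
        self.0.path.as_deref()
    }

    // Handles backed by the same Arc refer to the same asset.
    fn same_asset(&self, other: &StrongHandle) -> bool {
        Arc::ptr_eq(&self.0, &other.0)
    }
}
```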

Processed Assets

Asset processing can be enabled via the AssetPlugin. When developing Bevy Apps with processed assets, do this:

app.add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev()))

This runs the AssetProcessor in the background with hot-reloading. It reads assets from the assets folder, processes them, and writes them to the .imported_assets folder. Asset loads in the Bevy App will wait for a processed version of the asset to become available. If an asset in the assets folder changes, it will be reprocessed and hot-reloaded in the Bevy App.

When deploying processed Bevy apps, do this:

app.add_plugins(DefaultPlugins.set(AssetPlugin::processed()))

This does not run the AssetProcessor in the background. It behaves like AssetPlugin::unprocessed(), but reads assets from .imported_assets.

When the AssetProcessor is running, it will populate sibling .meta files for assets in the assets folder. Meta files for assets that do not have a processor configured look like this:

(
    meta_format_version: "1.0",
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: FromExtension,
        ),
    ),
)

This is metadata for an image asset. For example, if you have assets/my_sprite.png, this could be the metadata stored at assets/my_sprite.png.meta. Meta files are totally optional. If no metadata exists, the default settings will be used.

In short, this file says "load this asset with the ImageLoader and use the file extension to determine the image type". This type of meta file is supported in all AssetPlugin modes. If in Unprocessed mode, the asset (with the meta settings) will be loaded directly. If in ProcessedDev mode, the asset file will be copied directly to the .imported_assets folder. The meta will also be copied directly to the .imported_assets folder, but with one addition:

(
    meta_format_version: "1.0",
    processed_info: Some((
        hash: 12415480888597742505,
        full_hash: 14344495437905856884,
        process_dependencies: [],
    )),
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: FromExtension,
        ),
    ),
)

processed_info contains hash (a direct hash of the asset and meta bytes), full_hash (a hash of hash and the hashes of all process_dependencies), and process_dependencies (the path and full_hash of every process_dependency). A "process dependency" is an asset dependency that is directly used when processing the asset. Images do not have process dependencies, so this is empty.
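Conceptually, the relationship between hash and full_hash looks like the sketch below. This is illustrative only: std's DefaultHasher stands in for whatever hash function the processor actually uses, and the exact combination scheme is an assumption.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// hash: a direct hash over the asset bytes and meta bytes.
fn asset_hash(asset_bytes: &[u8], meta_bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    asset_bytes.hash(&mut hasher);
    meta_bytes.hash(&mut hasher);
    hasher.finish()
}

// full_hash: combines the asset's own hash with the full_hash of every
// process dependency, so a change anywhere in the dependency tree
// changes the full_hash and triggers reprocessing.
fn full_hash(own_hash: u64, dependency_full_hashes: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    own_hash.hash(&mut hasher);
    for dep in dependency_full_hashes {
        dep.hash(&mut hasher);
    }
    hasher.finish()
}
```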

When the processor is enabled, you can use the Process metadata config:

(
    meta_format_version: "1.0",
    asset: Process(
        processor: "bevy_asset::processor::process::LoadAndSave<bevy_render::texture::image_loader::ImageLoader, bevy_render::texture::compressed_image_saver::CompressedImageSaver>",
        settings: (
            loader_settings: (
                format: FromExtension,
            ),
            saver_settings: (
                generate_mipmaps: true,
            ),
        ),
    ),
)

This configures the asset to use the LoadAndSave processor, which runs an AssetLoader and feeds the result into an AssetSaver (which saves the given Asset and defines a loader to load it with). For terseness, LoadAndSave will likely get a shorter / friendlier type name when Stable Type Paths lands. LoadAndSave is likely to be the most common processor type, but arbitrary processors are supported.

CompressedImageSaver saves an Image in the Basis Universal format and configures the ImageLoader to load it as basis universal. The AssetProcessor will read this meta, run it through the LoadAndSave processor, and write the basis-universal version of the image to .imported_assets. The final metadata will look like this:

(
    meta_format_version: "1.0",
    processed_info: Some((
        hash: 905599590923828066,
        full_hash: 9948823010183819117,
        process_dependencies: [],
    )),
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: Format(Basis),
        ),
    ),
)

To try basis-universal processing out in Bevy examples, (for example sprite.rs), change add_plugins(DefaultPlugins) to add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev())) and run with the basis-universal feature enabled: cargo run --features=basis-universal --example sprite.

To create a custom processor, there are two main paths:

  1. Use the LoadAndSave processor with an existing AssetLoader: implement the AssetSaver trait and register the processor using asset_processor.register_processor::<LoadAndSave<ImageLoader, CompressedImageSaver>>(image_saver.into()).
  2. Implement the Process trait directly and register it using: asset_processor.register_processor(thing_processor).
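Under simplified assumptions, the LoadAndSave composition can be sketched in plain, synchronous Rust. These trait shapes are illustrative only: Bevy's actual Process / AssetLoader / AssetSaver traits are async and operate on readers and writers, not byte slices.

```rust
// Illustrative, synchronous stand-ins for the real async traits.
trait Loader {
    type Asset;
    fn load(&self, bytes: &[u8]) -> Self::Asset;
}

trait Saver {
    type Asset;
    fn save(&self, asset: &Self::Asset) -> Vec<u8>;
}

trait Process {
    fn process(&self, input: &[u8]) -> Vec<u8>;
}

// LoadAndSave: run a loader, then feed the resulting asset into a saver.
struct LoadAndSave<L, S> {
    loader: L,
    saver: S,
}

impl<L, S> Process for LoadAndSave<L, S>
where
    L: Loader,
    S: Saver<Asset = L::Asset>,
{
    fn process(&self, input: &[u8]) -> Vec<u8> {
        let asset = self.loader.load(input);
        self.saver.save(&asset)
    }
}

// Toy example: "load" text, then "save" it uppercased.
struct TextLoader;
impl Loader for TextLoader {
    type Asset = String;
    fn load(&self, bytes: &[u8]) -> String {
        String::from_utf8_lossy(bytes).into_owned()
    }
}

struct UppercaseSaver;
impl Saver for UppercaseSaver {
    type Asset = String;
    fn save(&self, asset: &String) -> Vec<u8> {
        asset.to_uppercase().into_bytes()
    }
}
```

The `Saver<Asset = L::Asset>` bound is the key point: any saver that accepts the loader's asset type composes with it, which is why CompressedImageSaver can pair with any loader that produces an Image.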

You can configure default processors for file extensions like this:

asset_processor.set_default_processor::<ThingProcessor>("thing")

There is one more metadata type to be aware of:

(
    meta_format_version: "1.0",
    asset: Ignore,
)

This will ignore the asset during processing / prevent it from being written to .imported_assets.

The AssetProcessor stores a transaction log at .imported_assets/log and uses it to gracefully recover from unexpected stops. This means you can force-quit the processor (and Bevy Apps running the processor in parallel) at arbitrary times!
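The recovery idea can be sketched with a minimal write-ahead log. This is an in-memory simplification for illustration; the real log is persisted at .imported_assets/log and records asset-processing transactions.

```rust
// Minimal write-ahead-log sketch: record intent before doing the work,
// record completion after. On restart, any "begun but never finished"
// entries identify exactly the transactions that must be redone.
#[derive(Debug, Clone, PartialEq)]
enum LogEntry {
    Begin(String),  // started processing this asset path
    Finish(String), // finished processing this asset path
}

fn begin(log: &mut Vec<LogEntry>, path: &str) {
    log.push(LogEntry::Begin(path.to_string()));
}

fn finish(log: &mut Vec<LogEntry>, path: &str) {
    log.push(LogEntry::Finish(path.to_string()));
}

// After a crash, replay the log: anything begun but not finished must be
// reprocessed; everything else is known-good and needs no reprocessing.
fn unfinished(log: &[LogEntry]) -> Vec<String> {
    let mut open: Vec<String> = Vec::new();
    for entry in log {
        match entry {
            LogEntry::Begin(path) => open.push(path.clone()),
            LogEntry::Finish(path) => open.retain(|p| p != path),
        }
    }
    open
}
```

This is why a force-quit only costs re-running the uncompleted transactions rather than a full reprocess.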

.imported_assets is "local state". It should not be checked into source control. It should also be considered "read only". In practice, you can modify processed assets and processed metadata if you really need to test something. But those modifications will not be represented in the hashes of the assets, so the processed state will be "out of sync" with the source assets. The processor will not fix this for you. Either revert the change after you have tested it, or delete the processed files so they can be re-populated.

Open Questions

There are a number of open questions to be discussed. We should decide if they need to be addressed in this PR and if so, how we will address them:

Implied Dependencies vs Dependency Enumeration

There are currently two ways to populate asset dependencies:

  • Implied via AssetLoaders: if an AssetLoader loads an asset (and retrieves a handle), a dependency is added to the list.
  • Explicit via the optional Asset::visit_dependencies: if server.load_asset(my_asset) is called, it will call my_asset.visit_dependencies, which will grab dependencies that have been manually defined for the asset via the Asset trait impl (which can be derived).

This means that defining explicit dependencies is optional for "loaded assets". And the list of dependencies is always accurate because loaders can only produce Handles if they register dependencies. If an asset was loaded with an AssetLoader, it only uses the implied dependencies. If an asset was created at runtime and added with asset_server.load_asset(MyAsset), it will use Asset::visit_dependencies.

However this can create a behavior mismatch between loaded assets and equivalent "created at runtime" assets if Assets::visit_dependencies doesn't exactly match the dependencies produced by the AssetLoader. This behavior mismatch can be resolved by completely removing "implied loader dependencies" and requiring Asset::visit_dependencies to supply dependency data. But this creates two problems:

  • It makes defining loaded assets harder and more error prone: Devs must remember to manually annotate asset dependencies with #[dependency] when deriving Asset. For more complicated assets (such as scenes), the derive likely wouldn't be sufficient and a manual visit_dependencies impl would be required.
  • Removes the ability to immediately kick off dependency loads: When AssetLoaders retrieve a Handle, they also immediately kick off an asset load for the handle, which means it can start loading in parallel before the asset finishes loading. For large assets, this could be significant. (although this could be mitigated for processed assets if we store dependencies in the processed meta file and load them ahead of time)
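For illustration, the explicit enumeration path (roughly what a #[dependency] derive would generate) can be sketched in self-contained Rust. Strings stand in for handles here; Bevy's real Asset::visit_dependencies works with untyped asset ids.

```rust
// Simplified sketch of explicit dependency enumeration. Strings stand in
// for Handle<T> values; the trait shape is illustrative, not Bevy's.
trait VisitDependencies {
    fn visit_dependencies(&self, visit: &mut dyn FnMut(&str));
}

struct Material {
    base_color_texture: String, // stand-in for a Handle<Image>
    normal_map: String,         // stand-in for a Handle<Image>
}

impl VisitDependencies for Material {
    fn visit_dependencies(&self, visit: &mut dyn FnMut(&str)) {
        // Roughly what annotating these fields with #[dependency] in a
        // derived Asset impl would generate.
        visit(&self.base_color_texture);
        visit(&self.normal_map);
    }
}
```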

Eager ProcessorDev Asset Loading

I made a controversial call in the interest of fast startup times ("time to first pixel") for the "processor dev mode configuration". When initializing the AssetProcessor, current processed versions of unchanged assets are yielded immediately, even if their dependencies haven't been checked yet for reprocessing. This means that non-current-state-of-filesystem-but-previously-valid assets might be returned to the App first, then hot-reloaded if/when their dependencies change and the asset is reprocessed.

Is this behavior desirable? There is largely one alternative: do not yield an asset from the processor to the app until all of its dependencies have been checked for changes. In some common cases (a load dependency has not changed since the last run) this will increase startup time. The main questions are "by how much?" and whether that slower startup time is worth only yielding assets that are true to the current state of the filesystem. Should this be configurable? I'm starting to think we should only yield an asset after its (historical) dependencies have been checked for changes + processed as necessary, but I'm curious what you all think.

Paths Are Currently The Only Canonical ID / Do We Want Asset UUIDs?

In this implementation AssetPaths are the only canonical asset identifier (just like the previous Bevy Asset system and Godot). Moving assets will result in re-scans (and currently reprocessing, although reprocessing can easily be avoided with some changes). Asset renames/moves will break code and assets that rely on specific paths, unless those paths are fixed up.

Do we want / need "stable asset uuids"? Introducing them is very possible:

  1. Generate a UUID and include it in .meta files
  2. Support UUID in AssetPath
  3. Generate "asset indices" which are loaded on startup and map UUIDs to paths.
  4. (maybe) Consider only supporting UUIDs for processed assets so we can generate quick-to-load indices instead of scanning meta files.

The main "pro" is that assets referencing UUIDs don't need to be migrated when a path changes. The main "con" is that UUIDs cannot be "lazily resolved" like paths. They need a full view of all assets to answer the question "does this UUID exist". Which means UUIDs require the AssetProcessor to fully finish startup scans before saying an asset doesnt exist. And they essentially require asset pre-processing to use in apps, because scanning all asset metadata files at runtime to resolve a UUID is not viable for medium-to-large apps. It really requires a pre-generated UUID index, which must be loaded before querying for assets.

I personally think this should be investigated in a separate PR. Paths aren't going anywhere ... everyone uses filesystems (and filesystem-like apis) to manage their asset source files. I consider them permanent canonical asset information. Additionally, they behave well for both processed and unprocessed asset modes. Given that Bevy is supporting both, this feels like the right canonical ID to start with. UUIDS (and maybe even other indexed-identifier types) can be added later as necessary.

Folder / File Naming Conventions

All asset processing config currently lives in the .imported_assets folder. The processor transaction log is in .imported_assets/log. Processed assets are added to .imported_assets/Default, which will make migrating to processed asset profiles (ex: a .imported_assets/Mobile profile) a non-breaking change. It also allows us to create top-level files like .imported_assets/log without it being interpreted as an asset. Meta files currently have a .meta suffix. Do we like these names and conventions?

Should the AssetPlugin::processed_dev configuration enable watch_for_changes automatically?

Currently it does (which I think makes sense), but it does make it the only configuration that enables watch_for_changes by default.

Discuss on_loaded High Level Interface:

This PR includes a very rough "proof of concept" on_loaded system adapter that uses the LoadedWithDependencies event in combination with asset_server.load_asset dependency tracking to support this pattern:

fn main() {
    App::new()
        .init_asset::<MyAssets>()
        .add_systems(Update, on_loaded(spawn_cat))
        .run();
}

#[derive(Asset, Clone)]
struct MyAssets {
    #[dependency]
    picture_of_my_cat: Handle<Image>,
    #[dependency]
    picture_of_my_other_cat: Handle<Image>,
}

impl FromWorld for MyAssets {
    fn from_world(world: &mut World) -> Self {
        let server = world.resource::<AssetServer>();
        MyAssets {
            picture_of_my_cat: server.load("meow.png"),
            picture_of_my_other_cat: server.load("meeeeeeeow.png"),
        }
    }
}

fn spawn_cat(In(my_assets): In<MyAssets>, mut commands: Commands) {
    commands.spawn(SpriteBundle {
        texture: my_assets.picture_of_my_cat.clone(),  
        ..default()
    });
    
    commands.spawn(SpriteBundle {
        texture: my_assets.picture_of_my_other_cat.clone(),  
        ..default()
    });
}

The implementation is very rough. And it is currently unsafe because bevy_ecs doesn't expose some internals to do this safely from inside bevy_asset. There are plenty of unanswered questions like:

  • "do we add a Loadable" derive? (effectively automate the FromWorld implementation above)
  • Should MyAssets even be an Asset? (largely implemented this way because it elegantly builds on server.load_asset(MyAsset { .. }) dependency tracking).

We should think hard about what our ideal API looks like (and if this is a pattern we want to support). Not necessarily something we need to solve in this PR. The current on_loaded impl should probably be removed from this PR before merging.

Clarifying Questions

What about Assets as Entities?

This Bevy Asset V2 proposal implementation initially stored Assets as ECS Entities. Instead of AssetId<T> + the Assets<T> resource it used Entity as the asset id and Asset values were just ECS components. There are plenty of compelling reasons to do this:

  1. Easier to inline assets in Bevy Scenes (as they are "just" normal entities + components)
  2. More flexible queries: use the power of the ECS to filter assets (ex: Query<Mesh, With<Tree>>).
  3. Extensible. Users can add arbitrary component data to assets.
  4. Things like "component visualization tools" work out of the box to visualize asset data.

However Assets as Entities has a ton of caveats right now:

  • We need to be able to allocate entity ids without a direct World reference (i.e. rework the id allocator in Entities ... I worked around this in my prototypes by just pre-allocating big chunks of entities)
  • We want asset change events in addition to ECS change tracking ... how do we populate them when mutations can come from anywhere? Do we use Changed queries? This would require iterating over the change data for all assets every frame. Is this acceptable or should we implement a new "event based" component change detection option?
  • Reconciling manually created assets with asset-system managed assets has some nuance (ex: are they "loaded" / do they also have that component metadata?)
  • "how do we handle "static" / default entity handles" (ties in to the Entity Indices discussion: Entity Indices #8319). This is necessary for things like "built in" assets and default handles in things like SpriteBundle.
  • Storing asset information as a component makes it easy to "invalidate" asset state by removing the component (or forcing modifications). Ideally we have ways to lock this down (some combination of Rust type privacy and ECS validation)

In practice, how we store and identify assets is a reasonably superficial change (porting off of Assets as Entities and implementing dedicated storage + ids took less than a day). So once we sort out the remaining challenges the flip should be straightforward. Additionally, I do still have "Assets as Entities" in my commit history, so we can reuse that work. I personally think "assets as entities" is a good endgame, but it also doesn't provide significant value at the moment and it certainly isn't ready yet with the current state of things.

Why not Distill?

Distill is a high quality fully featured asset system built in Rust. It is very natural to ask "why not just use Distill?".

It is also worth calling out that for a while, we planned on adopting Distill / I signed off on it.

However I think Bevy has a number of constraints that make Distill adoption suboptimal:

  • Architectural Simplicity:
    • Distill's processor requires an in-memory database (lmdb) and RPC networked API (using Cap'n Proto). Each of these introduces API complexity that increases maintenance burden and "code grokability". Ignoring tests, documentation, and examples, Distill has 24,237 lines of Rust code (including generated code for RPC + database interactions). If you ignore generated code, it has 11,499 lines.
    • Bevy builds the AssetProcessor and AssetServer using pluggable AssetReader/AssetWriter Rust traits with simple io interfaces. They do not necessitate databases or RPC interfaces (although Readers/Writers could use them if that is desired). Bevy Asset V2 (at the time of writing this PR) is 5,384 lines of Rust code (ignoring tests, documentation, and examples). Grain of salt: Distill does have more features currently (ex: Asset Packing, GUIDS, remote-out-of-process asset processor). I do plan to implement these features in Bevy Asset V2 and I personally highly doubt they will meaningfully close the 6,115 lines-of-code gap.
    • This complexity gap (which while illustrated by lines of code, is much bigger than just that) is noteworthy to me. Bevy should be hackable and there are pillars of Distill that are very hard to understand and extend. This is a matter of opinion (and Bevy Asset V2 also has complicated areas), but I think Bevy Asset V2 is much more approachable for the average developer.
    • Necessary disclaimer: counting lines of code is an extremely rough complexity metric. Read the code and form your own opinions.
  • Optional Asset Processing: Not all Bevy Apps (or Bevy App developers) need / want asset preprocessing. Processing increases the complexity of the development environment by introducing things like meta files, imported asset storage, running processors in the background, waiting for processing to finish, etc. Distill requires preprocessing to work. With Bevy Asset V2 processing is fully opt-in. The AssetServer isn't directly aware of asset processors at all. AssetLoaders only care about converting bytes to runtime Assets ... they don't know or care if the bytes were pre-processed or not. Processing is "elegantly" (forgive my self-congratulatory phrasing) layered on top and builds on the existing Asset system primitives.
  • Direct Filesystem Access to Processed Asset State: Distill stores processed assets in a database. This makes debugging / inspecting the processed outputs harder (either requires special tooling to query the database or they need to be "deployed" to be inspected). Bevy Asset V2, on the other hand, stores processed assets in the filesystem (by default ... this is configurable). This makes interacting with the processed state more natural. Note that both Godot and Unity's new asset system store processed assets in the filesystem.
  • Portability: Because Distill's processor uses lmdb and RPC networking, it cannot be run on certain platforms (ex: lmdb is a non-rust dependency that cannot run on the web, some platforms don't support running network servers). Bevy should be able to process assets everywhere (ex: run the Bevy Editor on the web, compile + process shaders on mobile, etc). Distill does partially mitigate this problem by supporting "streaming" assets via the RPC protocol, but this is not a full solve from my perspective. And Bevy Asset V2 can (in theory) also stream assets (without requiring RPC, although this isn't implemented yet).

Note that I do still think Distill would be a solid asset system for Bevy. But I think the approach in this PR is a better solve for Bevy's specific "asset system requirements".

Doesn't async-fs just shim requests to "sync" std::fs? What is the point?

"True async file io" has limited / spotty platform support. async-fs (and the rust async ecosystem generally ... ex Tokio) currently use async wrappers over std::fs that offload blocking requests to separate threads. This may feel unsatisfying, but it does still provide value because it prevents our task pools from blocking on file system operations (which would prevent progress when there are many tasks to do, but all threads in a pool are currently blocking on file system ops).

Additionally, using async APIs for our AssetReaders and AssetWriters also provides value because we can later add support for "true async file io" for platforms that support it. And we can implement other "true async io" asset backends (such as networked asset io).
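
The "offload blocking io to another thread" pattern described above can be sketched with nothing but the standard library. Everything here (the function name, the channel-based result delivery) is my own illustration of the pattern, not async-fs's actual API:

```rust
use std::fs;
use std::path::PathBuf;
use std::sync::mpsc;
use std::thread;

// Run a blocking std::fs read on a dedicated thread and deliver the
// result over a channel, so the calling thread (e.g. an async executor
// worker) is never itself blocked on file system io.
fn read_file_offloaded(path: PathBuf) -> mpsc::Receiver<std::io::Result<Vec<u8>>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(fs::read(&path));
    });
    rx
}

fn main() {
    let path = std::env::temp_dir().join("offload_demo.txt");
    fs::write(&path, b"hello").unwrap();

    let rx = read_file_offloaded(path.clone());
    // recv() blocks here for demonstration; an async wrapper would
    // instead await a oneshot-style future resolved by the worker thread.
    let bytes = rx.recv().unwrap().unwrap();
    assert_eq!(bytes, b"hello");

    let _ = fs::remove_file(path);
}
```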

Draft TODO

  • Fill in missing filesystem event APIs: file removed event (which is expressed as dangling RenameFrom events in some cases), file/folder renamed event
  • Assets without loaders are not moved to the processed folder. This breaks things like referenced .bin files for GLTFs. This should be configurable per-non-asset-type.
  • Initial implementation of Reflect and FromReflect for Handle. The "deserialization" parity bar is low here as this only worked with static UUIDs in the old impl ... this is a non-trivial problem. Either we add a Handle::AssetPath variant that gets "upgraded" to a strong handle on scene load or we use a separate AssetRef type for Bevy scenes (which is converted to a runtime Handle on load). This deserves its own discussion in a different pr.
  • Populate read_asset_bytes hash when run by the processor (a bit of a special case .. when run by the processor the processed meta will contain the hash so we don't need to compute it on the spot, but we don't want/need to read the meta when run by the main AssetServer)
  • Delay hot reloading: currently filesystem events are handled immediately, which creates timing issues in some cases. For example hot reloading images can sometimes break because the image isn't finished writing. We should add a delay, likely similar to the implementation in this PR.
  • Port old platform-specific AssetIo implementations to the new AssetReader interface (currently missing Android and web)
  • Resolve on_loaded unsafety (either by removing the API entirely or removing the unsafe)
  • Runtime loader setting overrides
  • Remove remaining unwraps that should be error-handled. There are a number of TODOs here
  • Pretty AssetPath Display impl
  • Document more APIs
  • Resolve spurious "reloading because it has changed" events (to repro run load_gltf with processed_dev())
  • load_dependency hot reloading currently only works for processed assets. If processing is disabled, load_dependency changes are not hot reloaded.
  • Replace AssetInfo dependency load/fail counters with loading_dependencies: HashSet<UntypedAssetId> to prevent reloads from (potentially) breaking counters. Storing this will also enable "dependency reloaded" events (see Next Steps)
  • Re-add filesystem watcher cargo feature gate (currently it is not optional)
  • Migration Guide
  • Changelog
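
As a side note on the loading_dependencies bullet above, the reason a HashSet is more robust than counters can be shown with a tiny standalone sketch (AssetInfo and the id type here are simplified stand-ins, not Bevy's real types):

```rust
use std::collections::HashSet;

// Simplified stand-in for UntypedAssetId.
type AssetId = u64;

struct AssetInfo {
    // Tracking *which* dependencies are still loading (rather than how
    // many) makes duplicate "finished" events harmless: removing an id
    // that is already gone is a no-op, whereas decrementing a counter
    // twice would corrupt the state.
    loading_dependencies: HashSet<AssetId>,
}

impl AssetInfo {
    /// Returns true once all dependencies have finished loading.
    fn dependency_finished(&mut self, id: AssetId) -> bool {
        self.loading_dependencies.remove(&id);
        self.loading_dependencies.is_empty()
    }
}

fn main() {
    let mut info = AssetInfo {
        loading_dependencies: [1, 2].into_iter().collect(),
    };
    assert!(!info.dependency_finished(1));
    // A reload delivering a second "finished" event for dependency 1
    // does not break anything.
    assert!(!info.dependency_finished(1));
    assert!(info.dependency_finished(2));
}
```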

Followup TODO

  • Replace "eager unchanged processed asset loading" behavior with "don't returned unchanged processed asset until dependencies have been checked".
  • Add true Ignore AssetAction that does not copy the asset to the imported_assets folder.
  • Finish "live asset unloading" (ex: free up CPU asset memory after uploading an image to the GPU), rethink RenderAssets, and port renderer features. The Assets collection uses Option<T> for asset storage to support its removal. (1) the Option might not actually be necessary ... might be able to just remove from the collection entirely (2) need to finalize removal apis
  • Try replacing the "channel based" asset id recycling with something a bit more efficient (ex: we might be able to use raw atomic ints with some cleverness)
  • Consider adding UUIDs to processed assets (scoped just to helping identify moved assets ... not exposed to load queries ... see Next Steps)
  • Store "last modified" source asset and meta timestamps in processed meta files to enable skipping expensive hashing when the file wasn't changed
  • Fix "slow loop" handle drop fix
  • Migrate to TypeName
  • Handle "loader preregistration". See allow asset loader pre-registration #9429

Next Steps

  • Configurable per-type defaults for AssetMeta: It should be possible to add configuration like "all png image meta should default to using nearest sampling" (currently this is hard-coded in per-loader/processor Settings::default() impls). Also see the "Folder Meta" bullet point.
  • Avoid Reprocessing on Asset Renames / Moves: See the "canonical asset ids" discussion in Open Questions and the relevant bullet point in Draft TODO. Even without canonical ids, folder renames could avoid reprocessing in some cases.
  • Multiple Asset Sources: Expand AssetPath to support "asset source names" and support multiple AssetReaders in the asset server (ex: webserver://some_path/image.png backed by an Http webserver AssetReader). The "default" asset reader would use normal some_path/image.png paths. Ideally this works in combination with multiple AssetWatchers for hot-reloading
  • Stable Type Names: this pr removes the TypeUuid requirement from assets in favor of std::any::type_name. This makes defining assets easier (no need to generate a new uuid / use weird proc macro syntax). It also makes reading meta files easier (because things have "friendly names"). We also use type names for components in scene files. If they are good enough for components, they are good enough for assets. And consistency across Bevy pillars is desirable. However, std::any::type_name is not guaranteed to be stable (although in practice it is). We've developed a stable type path to resolve this, which should be adopted when it is ready.
  • Command Line Interface: It should be possible to run the asset processor in a separate process from the command line. This will also require building a network-server-backed AssetReader to communicate between the app and the processor. We've been planning to build a "bevy cli" for a while. This seems like a good excuse to build it.
  • Asset Packing: This is largely an additive feature, so it made sense to me to punt this until we've laid the foundations in this PR.
  • Per-Platform Processed Assets: It should be possible to generate assets for multiple platforms by supporting multiple "processor profiles" per asset (ex: compress with format X on PC and Y on iOS). I think there should probably be arbitrary "profiles" (which can be separate from actual platforms), which are then assigned to a given platform when generating the final asset distribution for that platform. Ex: maybe devs want a "Mobile" profile that is shared between iOS and Android. Or a "LowEnd" profile shared between web and mobile.
  • Versioning and Migrations: Assets, Loaders, Savers, and Processors need to have versions to determine if their schema is valid. If an asset / loader version is incompatible with the current version expected at runtime, the processor should be able to migrate them. I think we should try using Bevy Reflect for this, as it would allow us to load the old version as a dynamic Reflect type without actually having the old Rust type. It would also allow us to define "patches" to migrate between versions (Bevy Reflect devs are currently working on patching). The .meta file already has its own format version. Migrating that to new versions should also be possible.
  • Real Copy-on-write AssetPaths: Rust's actual Cow (clone-on-write type) currently used by AssetPath can still result in String clones that aren't actually necessary (cloning an Owned Cow clones the contents). Bevy's asset system requires cloning AssetPaths in a number of places, which result in actual clones of the internal Strings. This is not efficient. AssetPath internals should be reworked to exhibit truer cow-like-behavior that reduces String clones to the absolute minimum.
  • Consider processor-less processing: In theory the AssetServer could run processors "inline" even if the background AssetProcessor is disabled. If we decide this is actually desirable, we could add this. But I don't think it's a priority in the short or medium term.
  • Pre-emptive dependency loading: We could encode dependencies in processed meta files, which could then be used by the Asset Server to kick off dependency loads as early as possible (prior to starting the actual asset load). Is this desirable? How much time would this save in practice?
  • Optimize Processor With UntypedAssetIds: The processor exclusively uses AssetPath to identify assets currently. It might be possible to swap these out for UntypedAssetIds in some places, which are smaller / cheaper to hash and compare.
  • One to Many Asset Processing: An asset source file that produces many assets currently must be processed into a single "processed" asset source. If labeled assets can be written separately they can each have their own configured savers and they could be loaded more granularly. Definitely worth exploring!
  • Automatically Track "Runtime-only" Asset Dependencies: Right now, tracking "created at runtime" asset dependencies requires adding them via asset_server.load_asset(StandardMaterial::default()). I think with some cleverness we could also do this for materials.add(StandardMaterial::default()), making tracking work "everywhere". There are challenges here relating to change detection / ensuring the server is made aware of dependency changes. This could be expensive in some cases.
  • "Dependency Changed" events: Some assets have runtime artifacts that need to be re-generated when one of their dependencies change (ex: regenerate a material's bind group when a Texture needs to change). We are generating the dependency graph so we can definitely produce these events. Buuuuut generating these events will have a cost / they could be high frequency for some assets, so we might want this to be opt-in for specific cases.
  • Investigate Storing More Information In Handles: Handles can now store arbitrary information, which makes it cheaper and easier to access. How much should we move into them? Canonical asset load states (via atomics)? (handle.is_loaded() would be very cool). Should we store the entire asset and remove the Assets<T> collection? (Arc<RwLock<Option<Image>>>?)
  • Support processing and loading files without extensions: This is a pretty arbitrary restriction and could be supported with very minimal changes.
  • Folder Meta: It would be nice if we could define per folder processor configuration defaults (likely in a .meta or .folder_meta file). Things like "default to linear filtering for all Images in this folder".
  • Replace async_broadcast with event-listener? This might be approximately drop-in for some uses and it feels more lightweight
  • Support Running the AssetProcessor on the Web: Most of the hard work is done here, but there are some easy straggling TODOs (make the transaction log an interface instead of a direct file writer so we can write a web storage backend, implement an AssetReader/AssetWriter that reads/writes to something like LocalStorage).
  • Consider identifying and preventing circular dependencies: This is especially important for "processor dependencies", as processing will silently never finish in these cases.
  • Built-in/Inlined Asset Hot Reloading: This PR regresses "built-in/inlined" asset hot reloading (previously provided by the DebugAssetServer). I'm intentionally punting this because I think it can be cleanly implemented with "multiple asset sources" by registering a "debug asset source" (ex: debug://bevy_pbr/src/render/pbr.wgsl asset paths) in combination with an AssetWatcher for that asset source and support for "manually loading paths with asset bytes instead of AssetReaders". The old DebugAssetServer was quite nasty and I'd love to avoid that hackery going forward.
  • Investigate ways to remove double-parsing meta files: Parsing meta files currently involves parsing once with "minimal" versions of the meta file to extract the type name of the loader/processor config, then parsing again to parse the "full" meta. This is suboptimal. We should be able to define custom deserializers that (1) assume the loader/processor type name comes first and (2) dynamically look up the loader/processor registrations to deserialize settings in-line (similar to components in the bevy scene format). Another alternative: deserialize as dynamic Reflect objects and then convert.
  • More runtime loading configuration: Support using the Handle type as a hint to select an asset loader (instead of relying on AssetPath extensions)
  • More high level Processor trait implementations: For example, it might be worth adding support for arbitrary chains of "asset transforms" that modify an in-memory asset representation between loading and saving. (ex: load a Mesh, run a subdivide_mesh transform, followed by a flip_normals transform, then save the mesh to an efficient compressed format).
  • Bevy Scene Handle Deserialization: (see the relevant Draft TODO item for context)
  • Explore High Level Load Interfaces: See this discussion for one prototype.
  • Asset Streaming: It would be great if we could stream Assets (ex: stream a long video file piece by piece)
  • ID Exchanging: In this PR Asset Handles/AssetIds are bigger than they need to be because they have a Uuid enum variant. If we implement an "id exchanging" system that trades Uuids for "efficient runtime ids", we can cut down on the size of AssetIds, making them more efficient. This has some open design questions, such as how to spawn entities with "default" handle values (as these wouldn't have access to the exchange api in the current system).
  • Asset Path Fixup Tooling: Assets that inline asset paths inside them will break when an asset moves. The asset system provides the functionality to detect when paths break. We should build a framework that enables formats to define "path migrations". This is especially important for scene files. For editor-generated files, we should also consider using UUIDs (see other bullet point) to avoid the need to migrate in these cases.
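
The String-clone cost behind the "Real Copy-on-write AssetPaths" bullet above is easy to demonstrate directly with std::borrow::Cow:

```rust
use std::borrow::Cow;

fn main() {
    // An Owned Cow clones its String on .clone(): the two values end up
    // with separate heap allocations.
    let owned: Cow<'static, str> = Cow::Owned(String::from("models/player.gltf"));
    let cloned = owned.clone();
    assert!(matches!(cloned, Cow::Owned(_)));
    assert_ne!(owned.as_ptr(), cloned.as_ptr());

    // A Borrowed Cow just copies the reference: no allocation at all.
    let borrowed: Cow<'static, str> = Cow::Borrowed("models/player.gltf");
    let cloned = borrowed.clone();
    assert!(matches!(cloned, Cow::Borrowed(_)));
    assert_eq!(borrowed.as_ptr(), cloned.as_ptr());
}
```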

Migration Guide

Migrating a custom asset loader

Existing asset loaders will need a few small changes to get them to work with Bevy Assets V2.

First, you'll need to add the asset type as an associated type of the loader. This type is called Asset and represents the type of the "default asset" produced by the loader.

You'll also need to add a Settings type which represents options that can be passed to the loader when you request an asset. If your asset has no settings, then you can just set it to the unit type.

pub struct MyAssetLoader;

impl AssetLoader for MyAssetLoader {
    type Asset = MyAsset;
    type Settings = ();

You'll need to make a couple small changes to the load function as well. The load function now takes a settings parameter whose type is, you guessed it, Settings:

    fn load<'a>(
        &'a self,
        reader: &'a mut Reader,
        settings: &'a Self::Settings,
        load_context: &'a mut LoadContext,
    ) -> BoxedFuture<'a, Result<Self::Asset, anyhow::Error>> {

Again, if you are not using settings, then you can just ignore the parameter (prefix it with "_").

Also, the second argument is now a reader rather than a vector of bytes. If your existing code expects bytes, you can simply read the entire stream:

    fn load<'a>(
        &'a self,
        reader: &'a mut Reader,
        _settings: &'a Self::Settings,
        load_context: &'a mut LoadContext,
    ) -> BoxedFuture<'a, Result<Self::Asset, anyhow::Error>> {
        Box::pin(async move {
            let mut bytes = Vec::new();
            reader.read_to_end(&mut bytes).await?;

Finally, you'll need to write the code which returns the default asset. This used to be done via a call to load_context.set_default_asset(), however in V2 you simply return the asset from the load function:

    fn load<'a>(
        &'a self,
        reader: &'a mut Reader,
        _settings: &'a Self::Settings,
        load_context: &'a mut LoadContext,
    ) -> BoxedFuture<'a, Result<Self::Asset, anyhow::Error>> {
        Box::pin(async move {
            let mut bytes = Vec::new();
            reader.read_to_end(&mut bytes).await?;
            let asset: MyAsset =
                serde_json::from_slice(&bytes).expect("unable to decode asset");
            Ok(asset)
        })
    }

To use the new loader, make sure you register both the loader and the asset type:

app.register_asset_loader(MyAssetLoader)
    .init_asset::<MyAsset>();

Labeled assets

If your loader allows labeled assets, there are a couple of different ways to handle them. The simplest is to call load_context.labeled_asset_scope:

// Assume `asset.children` is a HashMap or something.
// Using `drain` here so that we take ownership and don't end up with
// multiple references to the same asset.
asset.children.drain().for_each(|(label, mut item)| {
    load_context.labeled_asset_scope(label, |lc| {
        // Do any additional processing on the item
        // Use 'lc' to load dependencies
        item
    });
});

You can use the provided load context (lc) to load additional assets. These will automatically be registered as dependencies of the labeled asset.

Using assets

The actual call to load hasn't changed:

let handle = server.load("path/to/my/asset.json");

// ...

let data = assets.get(&handle).unwrap();

Asset events

There are a few changes to asset events. The event no longer contains a handle field; instead, it contains a field called id:

for ev in ev_template.read() {
    match ev {
        AssetEvent::Added { id } => {
            println!("Asset added");
        }
        
        AssetEvent::LoadedWithDependencies { id } => {
            println!("Asset loaded");
        }
        
        AssetEvent::Modified { id } => {
            println!("Asset modified");
        }

        AssetEvent::Removed { id } => {
            println!("Asset removed");
        }
    }
}

The id can be used to get access to the asset data, the asset's path or load status. Asset handles also contain an id field which can be used to compare for equality:

AssetEvent::Modified { id } => {
    for cmp in query.iter() {
        if cmp.handle.id() == *id {
            println!("Found it!");
        }
    }
}

Also, as you may have noticed, the set of events has changed. The most important of these is LoadedWithDependencies which tells you that the asset and all its dependencies have finished loading into memory.

@cart cart marked this pull request as draft May 16, 2023 23:53
@alice-i-cecile alice-i-cecile added this to the 0.11 milestone May 16, 2023
@alice-i-cecile alice-i-cecile added A-Assets Load files from disk to use for things like images, models, and sounds X-Controversial There is active debate or serious implications around merging this PR C-Feature A new feature, making something new possible C-Usability A targeted quality-of-life change that makes Bevy easier to use labels May 16, 2023
@coreh
Contributor

coreh commented May 17, 2023

🎉 This is very exciting

(A lot to digest, too, so I'll try reading it and providing feedback gradually as I go)


I know this is listed as an “open question” so it's likely just a strawperson/arbitrary choice for now, but I have huge reservations with going for a dot directory:(.imported_assets)

  • This won't have the desired effect on Windows, and will require additional file attributes to be truly hidden;
  • Having it hidden makes it harder to see via GUI file browsers in macOS and Linux, and many code editors;
  • Glob patterns will exclude it by default.

This means it's possible people will accidentally make botched game releases with missing preprocessed assets without realizing it, and it will either not work properly (if they disable preprocessing for the production build), or take longer to load and take extra disk space (if they don't). It's also possible people will make releases that include placeholder/development assets without realizing it, which could, in a worst-case scenario, even leak game content on a demo/preview release.

@cart
Member Author

cart commented May 17, 2023

@coreh

I know this is listed as an “open question” so it's likely just a strawperson/arbitrary choice for now, but I have huge reservations with going for a dot directory

Hmmm yeah I've definitely experienced weirdness with "dot things" in the past. Easy to accidentally miss. I'm pretty much convinced. I don't see much utility in it other than it logically "separating" it from the other directories. But even Rust stores things in target not .target. And gitignoring also visually changes folders in some IDEs, which provides that separation.

Contributor

@Neo-Zhixing Neo-Zhixing left a comment

The PR description seems to suggest that having meta files adjacent to asset file is the only way to go. However, some people might prefer to have an in-memory db to store the metadata. You said here that "This is important because pretty much all in-memory databases have unsupported platforms or build complications." but it's a rather weak argument. Assuming one is willing to deal with non-rust dependency and does not target unsupported platforms, I'd like to know if there are other reasons why they shouldn't use in-memory db.

Now looking through the code I think people should be able to implement their own sqlite/lmdb/whatever based storage solution fairly easily - they just need to implement the read_meta_bytes method in the AssetReader trait. However, I'm unsure about how that would integrate with the custom write-ahead logging. If you can use a real transactional database, you won't need to maintain your own logging, presumably?

I'd also like to learn more about the intended release process. In particular, how are we going to figure out what variants to include in the release? My understanding of AssetPlugin::processed_dev() is that the "processing" is going to take place on-demand when these assets were requested by the application. The processing results will then get stored in .imported_assets. However, .imported_assets is not checked into version control, so when somebody else / some CI server wants to make a release of the game, how are they going to populate .imported_assets without running through the entire game?


pub fn get_source_reader(&mut self, provider: &AssetProvider) -> Box<dyn AssetReader> {
    match provider {
        AssetProvider::Default => Box::new(FileAssetReader::new(self.default_file_source())),

The default asset provider should be configurable.

@markusstephanides

Hi, awesome draft! Here a few comments and suggestions from some years of working with Unreal Engine:

  • It would be helpful to separate the metadata of an asset from the actual source file. In Unreal Engine this is called "Virtual Assets", which lets the engine know of every asset that the game has, including its type, size etc., but the actual content is lazy loaded (if you use Perforce, even "lazy synced") as soon as you need it. This helps reduce initial loading times but might increase loading times during gameplay, which imo makes it a handy tool for developers that test in the same location over and over again, but it should not be enabled in a prod build. I thought that the .meta file would cover this but if I understood it correctly, it does not.
  • Modding: Current game engines that have asset preprocessing, like Unity and Unreal, make modding relatively hard. As a modder you either have to install the according engine to import the assets into a project, then export them in order to use them in the game, or you can only load raw, unoptimized, unlodded assets. I think the ideal way to handle this would be to put the asset preprocessing stuff into a separate crate which can be used by game devs to create their "modding tool/processor", so modders can just drag their raw files in and get processed assets which they can use in the game.
  • Encryption: Common game engines provide possibilities to protect the shipped game assets to make it at least harder for people to get the raw assets. I think there is currently an external crate available which handles this for bevy. Do you want to keep it out of the asset system or would it maybe be better to include it into bevy?
  • Noop/Placeholder Mode: I'm not sure if any engine currently handles this, or how much of it would be part of the asset system, but I would really love a mode which allows me to completely disable loading of certain asset groups/assets (with something like labels?) so I can focus on productivity. For example if I want to test my game feature A, I don't need or want all the sounds and textures loaded that are required for feature B. For sounds it could maybe just not load the files but act like they were loaded. For meshes it would only need very low poly non-textured meshes with unique colors instead of high-poly full PBR meshes. In theory this would drastically reduce loading times and also accelerate the development iteration experience. But tbf I have no concrete idea of how this could really work.

I hope I have made some interesting points, would be happy to discuss them further if they sound reasonable for you!

@inodentry
Contributor

Quick note (putting this here, before I take the time to think through this massive proposal more thoroughly):

I am in favor of entirely removing weak handles. The lightweight handle IDs serve that purpose (identify an asset, with no extra tracking/smart functionality) just fine. Even in the existing bevy_asset, I consider them redundant and smelly. It's just extra API complexity.

Let's just embrace "Use Handle for refcounted tracking, use AssetId for no tracking". It's simple and less confusing.

@inodentry
Contributor

inodentry commented May 17, 2023

re: Meta file format versioning and migration

I think we must figure out a good story for migration from the get go. Do not postpone this!

To give an example of this done poorly, and why it is so important, let's consider Bevy scenes. The Bevy scene format has always been just "whatever the scene serializer + reflect serializer output, as implemented in your version of Bevy". This is bad. It makes it impossible to rely on Bevy scene asset files for non-trivial game projects, because it locks you to a specific Bevy version. Upgrading Bevy for an existing large project is painful enough if you only have code. You have to fix the hundreds of compiler errors and maybe refactor a few things, but you have the compiler to help you. Migrating scene files is next to impossible, there is no tooling (except editor macros), you have to recreate them. If you had decided to embrace a scene-based workflow for a bevy game project, and had already authored some scene assets for your game, tough luck. This means that people are in a dilemma: get stuck on an old Bevy version, or forego using scenes entirely. Shame, because Bevy promises to be "data-driven", yet one of its major "data-driven" features is infeasible to use in practice, and most devs avoid it.

You can't avoid assets. If asset meta files become the new way of accomplishing common tasks, which is what we want to do, then we must ensure they remain usable in practice and don't tie people's hands like that.

Now, the new asset meta files have a version field, unlike the scene format. This is a good start. But we must have the migration infrastructure in Bevy now, and not punt it for later, so that when we (inevitably) want to change the meta file format in the future, this isn't a problem.

Also, I should note, we should actually plan on / expect breaking changes, even as early as the next dev cycle after this gets merged. I've been with this project and community for long enough to know that new things don't see much usage and testing in the field until they make it to a release. Very few people use Bevy main. After the release is out, people start actually using the new stuff, and that's when all the bugs and usability issues get discovered. See: ScheduleV3 needed breaking changes almost immediately in the 0.10->0.11 dev cycle.

If this rework gets merged in 0.11, we might very likely want breaking changes for 0.12. We should be ready for it. We don't want to make Bevy users stuck on 0.11 or afraid to use asset meta files. We don't want Bevy engine devs stuck either; as soon as we discover things we want to improve, we should be able to just do it, not have it blocked on having to figure out "meta file migration".

@cart
Member Author

cart commented May 17, 2023

@inodentry

I do think migrations are critical to nail down well before Bevy 0.12 (assuming meta files land in 0.11). Leaving users hanging here to fix things manually when they might have hundreds of assets is not an option. I promise this is not on the table at all.

I think we must figure out a good story for migration from the get go. Do not postpone this!

If by "do not postpone this" you mean "do not let 0.12 get remotely close before having migration automation for 0.11->0.12 ready" I totally agree. If you mean "do not merge this PR without migration automation" I very strongly disagree for a multitude of reasons:

  1. This PR is massive and already a challenge to review. Adding migration tooling on top is not a serious option. If anything I will break this PR into smaller pieces for the final round of review (ex: AssetServer and AssetProcessor have pretty clean API lines and could probably be broken up).
  2. Migration tooling is not an unknown design space, even though nice Bevy-native migration tooling is a relatively unexplored design space. For example if we can't build something like Theoretical Cool Migration System (see below) in time for Bevy 0.12, we can build a quick manual tool where we copy/paste the old versions of the core metadata rust types and loader settings, (ex: bevy_0_10::AssetMeta), deserialize old meta using those types, define a simple mapping interface between the old bevy_0_10::AssetMeta to the latest bevy_asset::AssetMeta version in Rust code, and serialize the results. This is reasonably simple to build and could still be pluggable to support third party metadata types.
  3. The TODO list for making the Bevy Asset V2 MVP viable at all is still long. I do agree that migrations are critical, but they are not the immediate priority. We don't need migration tooling if we don't have a new asset system.

Theoretical Cool Migration System

  • Add tooling that dumps schemas of Bevy meta types to some format (likely feeds on Bevy Reflect data)
  • Define a format for defining migration "patches" between versions of that schema (possibly implemented via the upcoming Bevy Reflect patching features currently being worked on)
  • Iterate all current meta files, check for low version numbers, and run migration patches if needed (automated in the AssetProcessor, a standalone bevy cli tool, both, etc)
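The "quick manual tool" idea in point 2 above could be sketched roughly like this. All type names and fields here are illustrative stand-ins, not real Bevy types, and a real tool would wrap this mapping in RON (de)serialization:

```rust
// Hypothetical sketch: keep a frozen copy of the 0.10-era meta types and
// map them onto the current types in plain Rust code.

mod v0_10 {
    // Frozen copy of the old meta structure (heavily simplified).
    #[derive(Debug, Clone)]
    pub struct AssetMeta {
        pub meta_format_version: String,
        pub loader: String,
    }
}

mod latest {
    // Current meta structure (also simplified): loaders are versioned too.
    #[derive(Debug, Clone, PartialEq)]
    pub struct AssetMeta {
        pub meta_format_version: String,
        pub loader: String,
        pub loader_version: u32,
    }
}

/// Map an old meta value onto the latest structure, filling new fields with
/// defaults. A real tool would deserialize old RON into `v0_10::AssetMeta`,
/// run this mapping, and serialize the result back out.
fn migrate(old: v0_10::AssetMeta) -> latest::AssetMeta {
    latest::AssetMeta {
        meta_format_version: "1.0".to_string(),
        loader: old.loader,
        loader_version: 1, // oldest known loader version
    }
}

fn main() {
    let old = v0_10::AssetMeta {
        meta_format_version: "0.10".to_string(),
        loader: "ImageLoader".to_string(),
    };
    let new = migrate(old);
    println!("{} {} v{}", new.meta_format_version, new.loader, new.loader_version);
}
```

The pluggability mentioned above would amount to letting third parties register their own old-type/mapping pairs.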

@coreh
Contributor

coreh commented May 17, 2023

IMO, Migrations can be made less painful by versioning specific fields/features, instead of the entire meta file. Then an asset meta could be using (for example) compression V3, Downscaling V2 and Depth-to-Normal conversion V5. Users only pay the cost of migrating the specific features they rely on, and we can keep them supported indefinitely since they're easy enough to plug in/out.

Re: The file serialization format, if possible I would prefer us to use (or support as an alternative format) something a little bit more widely adopted than RON, like JSON or JSON5. This has some verbosity cost but makes consuming and producing meta files from third party tools that are not written in Rust easier.

I also think, if possible, we should support a single top level meta file that applies settings to the entire directory.

@cart
Member Author

cart commented May 17, 2023

@Neo-Zhixing

The PR description seems to suggest that having meta files adjacent to asset file is the only way to go. However, some people might prefer to have an in-memory db to store the metadata.

For those that do, they (or maybe we ... eventually) can implement in-memory DB AssetReaders/AssetWriters. I don't see anything preventing this. I also strongly believe that storing the "source" metadata in the filesystem adjacent to assets is the correct default. Hiding that away in a DB separate from the asset source file is pretty bad UX from my perspective. Making changes would be way harder. Source control would be a nightmare. You would need separate bespoke tooling to handle both of those cases. I think storing the "processed" asset and/or metadata in a DB is the more compelling story (but I don't think that should be the default ... see below).

You said here that "This is important because pretty much all in-memory databases have unsupported platforms or build complications." but it's a rather weak argument. Assuming one is willing to deal with a non-rust dependency and does not target unsupported platforms, I'd like to know if there are other reasons why they shouldn't use an in-memory db.

That people are willing to deal with non-rust dependencies (which may or may not build on their platform by default) and don't target unsupported platforms are both big assumptions. I'm sure plenty of people are ok with those constraints, but I strongly believe that by default Bevy should work everywhere. I think the current filesystem + write-ahead-logger approach is the correct default / first thing to target. It reduces dependencies and build complexity (more platforms supported out of the box, faster build times, higher likelihood of building on a given configuration). It also gives users easy / natural / direct access to viewing and debugging processed assets and their metadata. I don't see those arguments as weak at all.

That being said I'm happy to accommodate in-memory DB AssetReader/AssetWriter scenarios. Bevy Asset V2 builds on AssetReader/Writer precisely to support scenarios like this. If they can't accommodate that scenario, they likely can't accommodate other backends either (and if that's true, why use a Trait at all). A 3rd party in-memory db backend would be a great stress test of that interface.

I'd also like to learn more about the intended release process. In particular, how are we going to figure out what variants to include in the release?

Currently it looks like this:

  1. Run the processor until it has finished (currently the only option is running the game with processed_dev until processing finishes, but there will be a cli tool too, as called out in the PR description)
  2. Deploy the .imported_assets folder with your game

However my plan is to have a more formal "deploy after processor is finished" process once "multiple processor profiles" and "Asset Packs" land. .imported_assets will contain all profiles (ex: Default, Mobile, Xbox, etc) instead of just Default like it does now. A deploy action will use some (configurable per deployment) logic to take assets from profiles to produce the final deployment. Some theoretical deploy configuration examples:

  1. Take all of the assets from the Web profile and store them in a single compressed Asset Pack
  2. Consider the assets in the Default profile, but overwrite them with the Mobile profile entries where relevant, put half of them in an Asset Pack and the other half without an asset pack.

Lots of design problems yet to solve in that space / that's definitely a "future work" thing.

My understanding of AssetPlugin::processed_dev() is that the "processing" is going to take place on-demand when these assets were requested by the application. The processing results will then get stored in .imported_assets. However, .imported_assets is not checked into version control, so when somebody else / some CI server wants to make a release of the game, how are they going to populate .imported_assets without running through the entire game?

They aren't processed "on demand" when requested (although we could probably support that scenario if we wanted to). You might be conflating my "processed assets yielded on demand" phrasing (as soon as a processed version of an asset is available for an asset load, it is loaded) with "must be processed on demand or it won't be processed at all" (which is not a feature that currently exists).

When the processor starts, it doesn't stop until it has fully reconciled the "source asset state" with the "processed asset state" to the best of its ability. It could just as easily be run as a command line tool that produces the final processed outputs. See the Next Steps bullet point about the bevy cli.

So the default answer is that when somebody wants a copy of .imported_assets, they should run the processor until it finishes (either via AssetPlugin::processed_dev or the future bevy cli).

In the future there will likely be demand for "remote artifact caches", but I see that as out of scope for the short/medium term (and fully compatible with the current direction of Bevy Asset V2).

If you can use a real transactional database, you won't need to maintain your own logging presumably?

Depends on the details of the implementation. But yeah I can imagine a scenario where this isn't necessary. As stated in the description, I'll be putting the write ahead logging behind a trait anyway. So you could just have a null "do nothing" impl in that case.
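The "write ahead logging behind a trait with a null impl" idea could be sketched like this. The names (ProcessorLog, FileLog, NullLog) are invented for illustration and are not Bevy's API:

```rust
// Sketch: the processor talks to a logging trait, so backends with real
// transactions can plug in a "do nothing" implementation.
trait ProcessorLog {
    fn begin(&mut self, path: &str);
    fn end(&mut self, path: &str);
}

// Default: record begin/end markers so interrupted work can be detected
// and reprocessed on the next run (here just an in-memory Vec for brevity;
// the real thing would write to disk).
struct FileLog {
    entries: Vec<String>,
}

impl ProcessorLog for FileLog {
    fn begin(&mut self, path: &str) {
        self.entries.push(format!("begin {path}"));
    }
    fn end(&mut self, path: &str) {
        self.entries.push(format!("end {path}"));
    }
}

// For a transactional database backend, logging is unnecessary: no-op.
struct NullLog;

impl ProcessorLog for NullLog {
    fn begin(&mut self, _path: &str) {}
    fn end(&mut self, _path: &str) {}
}

fn main() {
    let mut log = FileLog { entries: Vec::new() };
    log.begin("sprite.png");
    log.end("sprite.png");
    assert_eq!(log.entries, vec!["begin sprite.png", "end sprite.png"]);

    let mut null = NullLog;
    null.begin("sprite.png"); // no-op
    null.end("sprite.png");
    println!("ok");
}
```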

@inodentry

I am in favor of entirely removing weak handles.

Cool, me too! (if you missed it, I briefly mention this in the description) Pretty certain I've cleared the way for this. I'm down to explore it once the other more critical TODOs are addressed.

@cart
Member Author

cart commented May 18, 2023

@coreh

IMO, Migrations can be made less painful by versioning specific fields/features, instead of the entire meta file. Then an asset meta could be using (for example) compression V3, Downscaling V2 and Depth-to-Normal conversion V5.

This is already the plan (see my Migrations section in Next Steps). The metadata file version corresponds only to the core "structure" of the metadata file. Each loader/processor/saver will have its own version too.

Re: The file serialization format, if possible I would prefer us to use (or support as an alternative format) something a little bit more widely adopted than RON, like JSON or JSON5. This has some verbosity cost but makes consuming and producing meta files from third party tools that are not written in Rust easier.

Hmm, I'm willing to consider this / see what everyone else thinks, but Bevy is Rust-first in pretty much every way. RON makes the metadata file nicer to look at, terser, easier to edit, and more naturally mapped to the source rust types (such as the Process/Load/Ignore enum). Meta files are a "bevy interface" in the same way that Rust code is a Bevy interface. Being forced to use untyped (or verbosely hacked-in-typed) JSON for configuring my Bevy assets when I know RON exists would make me pretty sad personally.

And there's also a chance that we will want it to be expressed as "bevy reflect serialization ron" in the interest of using the same patching / migration system for both bevy meta and bevy scene files. Another option is to support both JSON and RON (although I'm not in love with fracturing the ecosystem).

I also think, if possible, we should support a single top level meta file that applies settings to the entire directory.

Agreed! I already covered this in Next Steps

@cart
Member Author

cart commented May 18, 2023

@markusstephanides

It would be helpful to separate metadata of an asset and the actual source file. In Unreal Engine this is called "Virtual Assets", which lets the engine know of every asset the game has, including its type, size, etc., but the actual content is lazy loaded (if you use Perforce, even "lazy synched") as soon as you need it. This helps reduce initial loading times but might increase loading times during gameplay, which imo makes it a handy tool for developers that test the same location over and over again, but it should not be enabled in a prod build. I thought that the .meta file would cover this but if I understood it correctly, it does not.

We already do full lazy loading of asset contents and asset metadata (this is one of the main benefits of embracing paths as canonical ids). That being said, the processor empowers us to build arbitrary "global" views of assets (for example a UUID -> Asset Path mapping index). I think we can have the best of both worlds. If we really decide it would be beneficial to store metadata elsewhere (either in specific contexts or generally), we can definitely do that. The AssetReader/AssetWriter interface allows users to decide where their metadata is stored.

Modding: Current game engines that have asset preprocessing like Unity, Unreal make modding relatively hard. As a modder you either have to install the according engine to import the assets into a project, then export it in order to use them in the game, or you can only load raw, unoptimized, unlodded assets. I think the ideal way to handle this would be to put the asset preprocessing stuff into a separate crate which can be used by game devs to create their "modding tool/processor", so modders can just drag their raw files in and get processed assets which they can use in the game.

The AssetProcessor is intentionally built in a way that can be deployed alongside games (this was one of the primary motivators for not using Distill ... see the PR description). It is also built with the intention to be run as a command line tool (see the PR description).

Encryption: Common game engines provide possibilities to protect the shipped game assets to make it at least harder for people to get the raw assets. I think there is currently an external crate available which handles this for bevy. Do you want to keep it out of the asset system or would it maybe be better to include it into bevy?

I see this as a "distribution" step similar to (or maybe even a part of) "asset packs" (see the Next Steps section and #8624 (comment)). I'm down to support this if there is enough demand. And I think it's compatible with the current direction of Bevy Asset V2.

That being said, I personally believe encryption in this context doesn't really solve the problem (and is therefore a fruitless waste of resources). In order for the game to render an asset, it needs the unencrypted form (and the key to decrypt it). The second someone with reasonably common binary hacking skills is interested, the assets can be decrypted. IMO this is just a waste of computing resources. And normal "compressed" versions of assets (ex: like what we already plan to do with Asset Packs) already obfuscate assets from the average uninformed layperson (but what damage are they going to do?).

But yeah plenty of people still ask for this and I love making people happy.

Noop/Placeholder Mode: I'm not sure if currently any engine is able to handle this and not sure how much it would be part of the asset system but I would really love to have a mode which allows me to either completely disable loading of certain asset groups/assets with something like labels? so I can focus on productivity. For example if I want to test my game feature A, I don't need or want to have all sounds, textures loaded that are required for feature B. Instead for sounds it could maybe just not load the files but act like they were loaded. For meshes it would only need very low poly non-textured meshes with unique colors instead of high-poly full PBR meshes. In theory this would drastically reduce loading times and also accelerate the development iteration experience. But tbf I have no concrete idea of how this could really work.

This is an interesting thought. I think it could be accomplished via the existing system with a Processor impl:

  • create an empty asset file with the path/extension you intend to use (ex: sprite.png)
  • configure the meta file to use a PlaceholderImage processor (which ignores the contents of sprite.png and writes a placeholder texture). The PlaceholderImage processor could have configurable settings (ex: width, height, color)
  • Whenever you add the "real" sprite.png, just delete the placeholder meta file, and the "real" image meta with the default image processor will be auto-populated when the processor runs.

This system can already be implemented with the existing interfaces and features.
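The placeholder idea above could be sketched with a toy processor trait. Everything here (ToyProcess, PlaceholderImage, PlaceholderImageSettings) is invented for illustration; Bevy's real Process trait is shaped differently (async, working through a processing context):

```rust
// Settings a PlaceholderImage processor might expose in its meta file.
struct PlaceholderImageSettings {
    width: u32,
    height: u32,
    color: [u8; 4], // RGBA
}

// Toy stand-in for a processor trait: take settings + source bytes,
// produce processed output bytes.
trait ToyProcess {
    type Settings;
    fn process(&self, settings: &Self::Settings, source_bytes: &[u8]) -> Vec<u8>;
}

struct PlaceholderImage;

impl ToyProcess for PlaceholderImage {
    type Settings = PlaceholderImageSettings;

    // Ignore the (possibly empty) source file and emit a solid-color
    // raw RGBA buffer of the configured size.
    fn process(&self, s: &Self::Settings, _source_bytes: &[u8]) -> Vec<u8> {
        let mut out = Vec::with_capacity((s.width * s.height * 4) as usize);
        for _ in 0..(s.width * s.height) {
            out.extend_from_slice(&s.color);
        }
        out
    }
}

fn main() {
    let settings = PlaceholderImageSettings {
        width: 2,
        height: 2,
        color: [255, 0, 255, 255], // magenta "missing asset" marker
    };
    let bytes = PlaceholderImage.process(&settings, &[]);
    println!("placeholder is {} bytes", bytes.len());
}
```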

@cart
Member Author

cart commented May 18, 2023

@markusstephanides I realized I didn't fully grok / address an important part of the Virtual Assets suggestion: defining the "actual" path of an asset stored elsewhere. I do plan to support multiple asset sources (see Next Steps), which would include assets stored in arbitrary remote locations.

I think this largely amounts to a "redirect" from some local "foo.png" path to "some_remote_asset_provider://foo.png".

I think it probably makes sense to support something like this (although I don't think it is an immediate priority). It could build on the existing/planned systems. The final deployed asset might look something like this:

(
    meta_format_version: "1.0",
    asset: LoadFromPath(
        path: "some_remote_asset_provider://foo.png"
    ),
)

Where the asset path follows the format suggested in Multiple Asset Sources in the Next Steps section of this PR.

Alternatively, you could just request "some_remote_asset_provider://foo.png" directly in scenes/code, but that isn't necessarily desirable in every situation. We can probably support both. But yeah probably a "future work" thing.

@B-Reif
Contributor

B-Reif commented May 18, 2023

Ron makes the metadata file nicer to look at, terser, easier to edit, and more naturally mapped to the source rust types.

Fully agree that RON is the way forward. I don’t really see a third-party use case that would be able to consume Bevy’s metadata, but for some reason not be able to consume RON.

For the future, I’m interested to see how this will plug in to prefabs and other arbitrary serialized data.

Comment on lines 273 to 276
let mut context = self.begin_labeled_asset(label.clone());
let asset = load(&mut context);
let loaded_asset = context.finish(asset, None);
self.add_loaded_labeled_asset(loaded_asset)
Contributor

@nicopap nicopap May 18, 2023


This would benefit from a trace! scope. I think it could help for understanding the loading time of individual assets (either in tracy or just for debuggability).

Also consider making load: FnOnce(&mut LoadContext) -> Result<A>. Handling errors otherwise would require returning dummy default values.

In bevy_mod_fbx, I'm considering doing that anyway, and accumulating errors (so that users can fix multiple errors at a time).
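The "accumulate errors" idea can be sketched with invented types (no Bevy APIs): attempt every item and collect failures instead of bailing on the first one, so users see all problems at once:

```rust
// Toy sketch: "load" a list of labeled assets, collecting errors instead
// of returning on the first failure. Here, anything ending in ".bad"
// stands in for an asset that fails to parse.
fn load_all(labels: &[&str]) -> (Vec<String>, Vec<String>) {
    let mut loaded = Vec::new();
    let mut errors = Vec::new();
    for label in labels {
        if label.ends_with(".bad") {
            errors.push(format!("failed to load {label}"));
        } else {
            loaded.push(label.to_string());
        }
    }
    (loaded, errors)
}

fn main() {
    let (loaded, errors) = load_all(&["a.png", "b.bad", "c.png"]);
    println!("loaded {} assets, {} errors", loaded.len(), errors.len());
    for e in &errors {
        println!("  {e}");
    }
}
```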

Member Author


I'm down for more tracing!

load: FnOnce(&mut LoadContext) -> Result<A>.

It would require making labeled_asset_scope return a Result too. Gotta consider the implications of that, but that's probably ok.

@inodentry
Contributor

@cart re: migration

I agree with your points. I did not mean to demand that migration should be part of this PR. I am all for breaking up huge PRs into a series of smaller ones.

I was burned with scenes before. All I wanted to hear was that you are going to make a commitment towards ensuring users will have an upgrade path. I don't want us to make a Bevy release that has the new meta files, without being able to confidently make breaking changes as early as the next release and be prepared for supporting users through them.

You gave me that confirmation! So, thanks! My mind is at ease. :)

@Alainx277

Alainx277 commented May 18, 2023

Are meta files always required for assets to be processed? I think that by using default meta information or folder meta files we could remove a lot of clutter. For example, a pull request for a Unity game has a meta file added for each asset, even if every asset of that type has the same configuration.

I would like to live without meta files at all, but I see how for example images and model files need additional information. If we can do this without having a meta file for every single asset, that would be a great win.

@cart
Member Author

cart commented May 18, 2023

@Alainx277

Are meta files always required for assets to be processed? I think using default meta information or folder meta files we could remove a lot of clutter. For example, a pull request for a Unity game has a meta file added for each asset, even if every asset of that type has the same configuration.

Yeah, I didn't call this out explicitly, but it was on my mind. Probably won't happen in this PR, but I'm very on board with having a user-configurable list of asset types to produce "source asset folder" meta files for. They are already optional, so it's really just adding a filter to the meta-population logic at this point.

@Neo-Zhixing
Contributor

How is this going to work with dependencies? Let's say a crate contains some asset files, shaders etc that they want the users to be able to use directly. How can we include that into the asset search path?

@cart
Member Author

cart commented Sep 13, 2023

Just added an initial Migration Guide written by @viridia. Thanks!

@ethereumdegen
Contributor

Outstanding work!!

At first glance, I still am not a fan of how an asset loader binds itself to an extension string like this


fn extensions(&self) -> &[&str] {
    &["thing"]
}

and an asset is loaded like this


fn system(asset_server: Res<AssetServer>) {
    let handle: Handle<Thing> = asset_server.load("cool.thing");
}

because we are relying on the extension to determine the loader, which takes A LOT of agency away from the developer. I would rather it be more like

fn system(asset_server: Res<AssetServer>) {
    let handle: Handle<Thing> = my_file_loader.load("cool.thing");
}

or

fn system(asset_server: Res<AssetServer>) {
    let handle: Handle<Thing> = asset_server.load_with_loader("cool.thing", "loadername");
}

(and I think this may doubly have the effect that we wouldn't even really need .meta files in most cases)

@ethereumdegen
Contributor

I also have another comment for consideration. I am trying to load an Image that is inside of a Zip into the bevy asset server and I am having huge problems; it seems not possible, actually. You see, LoadContext requires each Asset to have a Path, but files that are inside of Zip files don't have a standard Path. From what I can tell, bevy needs to support 'pathless assets' for my dream to become a reality.

https://gist.github.com/ethereumdegen/f028b1072f8d91ccd8972bf846b4e189

Am I wrong? Is there some way to actually load assets from a zip file?

@viridia
Contributor

viridia commented Sep 20, 2023

@ethereumdegen I think what you want to do is treat labels as paths.

A Bevy AssetPath has two parts: A "path" which identifies a specific file, and a "label" which specifies an asset contained within a file. The meaning of the label will vary depending on the structure of the file. For something like a .png file, the label is meaningless, and so is omitted. For a JSON or XML file, the label might reference the internal structure of the file via XPath or JSON Pointer syntax. For something like a Zip file, the label could mean the path to the compressed file within the archive.

The normal syntax for an asset path is <path>#<label>. The # character is a separator, and means exactly the same thing as it does in URLs, where the # indicates a fragment - that is, an internal label within a document.
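The `<path>#<label>` split described above can be sketched with a hypothetical helper (this is not Bevy's AssetPath type, just an illustration of the convention):

```rust
// Split an asset path string into its file path and optional fragment
// label, following the `<path>#<label>` convention described above.
fn parse_asset_path(input: &str) -> (&str, Option<&str>) {
    match input.split_once('#') {
        Some((path, label)) => (path, Some(label)),
        None => (input, None),
    }
}

fn main() {
    // A zip-style label: the path into the archive rides in the fragment.
    let (path, label) = parse_asset_path("models/pack.zip#textures/wall.png");
    println!("file: {path}, label: {label:?}");

    // No fragment: the label is simply absent.
    let (path, label) = parse_asset_path("sprite.png");
    println!("file: {path}, label: {label:?}");
}
```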

One downside to this approach is that you have to deserialize everything - that is, because Bevy asset loaders operate on entire files, when you load your .zip file, you have to register every internal path, even the ones you aren't using. So you have to decompress everything, since you don't yet know which bits of the zip file are going to stay in memory after you finish parsing the zip file. Worse, if you later decide that you need another asset from the same .zip file, you'll have to read the whole file again. (I know that zip files are designed to allow a single entry to be unpacked, but this only works if you know which entry is needed - and the asset loader doesn't currently provide a way to say "I only need these parts of the file.")

However, this isn't the only possible approach. Bevy's access to the filesystem is via a plugin. You might be able to add a different plugin that treats the zip file as an alternate filesystem. However, for this you'd need some means to distinguish regular file paths from zip file paths. How you do that is up to you. This is an area that I am not an expert in.

@ethereumdegen
Contributor

ethereumdegen commented Sep 20, 2023

@ethereumdegen I think what you want to do is treat labels as paths.

I was actually able to load GLTF from within a zip file like this -- I had help in the Discord -- in this way I am treating labels/dependencies as paths :D

https://gist.github.com/ethereumdegen/37d160afd4f5e829c6143785d66bda38

@magras

magras commented Sep 20, 2023

One downside to this approach is that you have to deserialize everything - that is, because Bevy asset loaders operate on entire files, when you load your .zip file, you have to register every internal path, even the ones you aren't using. So you have to decompress everything, since you don't yet know which bits of the zip file are going to stay in memory after you finish parsing the zip file. Worse, if you later decide that you need another asset from the same .zip file, you'll have to read the whole file again. (I know that zip files are designed to allow a single entry to be unpacked, but this only works if you know which entry is needed - and the asset loader doesn't currently provide a way to say "I only need these parts of the file.")

I believe you can pass an internal path to the loader via AssetLoader::Settings. Haven't had the opportunity to try it yet, though.

@nicopap
Contributor

nicopap commented Sep 20, 2023

At least with the initial POC asset_v2, I was able to load zip files in my bvyfst plugin, which worked as a benchmark of what is possible to do with asset_v2: https://github.com/nicopap/bvyfst, though it's severely out of date now.


/// NOTE: changing the hashing logic here is a _breaking change_ that requires a [`META_FORMAT_VERSION`] bump.
pub(crate) fn get_asset_hash(meta_bytes: &[u8], asset_bytes: &[u8]) -> AssetHash {
    let mut context = md5::Context::new();

@quininer quininer Oct 14, 2023


Even without considering security, using md5 is not a good choice; SipHash is usually better. SipHash as a short hash is faster than md5, has no known weaknesses, and is more secure.

rustc also uses siphash for similar purposes. https://github.com/rust-lang/rust/blob/481d45abeced571b533016a994cba7337102a4a4/compiler/rustc_data_structures/src/stable_hasher.rs#L22
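To illustrate the 64-bit short-hash being discussed, here is a sketch using std's DefaultHasher (currently a SipHash variant, though the std docs leave the exact algorithm unspecified). This is not the hashing Bevy ships; the PR's get_asset_hash uses md5 (later replaced by blake3, per the commit referenced below):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash meta bytes and asset bytes together into a 64-bit value.
// Illustrative only: 64 bits is fine for change detection but, as the
// thread notes, too small a space to be cryptographically secure.
fn asset_hash(meta_bytes: &[u8], asset_bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    meta_bytes.hash(&mut hasher);
    asset_bytes.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let a = asset_hash(b"meta", b"asset");
    let b = asset_hash(b"meta", b"asset");
    let c = asset_hash(b"meta", b"other");
    assert_eq!(a, b); // same inputs, same hash
    assert_ne!(a, c); // different asset bytes change the hash
    println!("{a:x}");
}
```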


TBH the fact that not even the authors of SipHash are willing to call it cryptographically secure tells me it's not.

But md5 is a known security weakness and must not be considered secure under FIPS.


Yes, a short hash outputs a 64-bit space, which is too small to be cryptographically secure, but it is not like md5, which has been broken.


@minecrawler minecrawler Nov 6, 2023


Why does a hash algorithm in an asset system need to be cryptographically secure?


@duaneking duaneking Nov 6, 2023


Why does a hash algorithm in an asset system need to be cryptographically secure?

For game security and anti-cheating.

The bevy engine supporting arbitrary injection of game assets is a security flaw that makes cheating in games via things like ESP easier, for example; and weak hashes on game assets allow hash collisions to replace arbitrary game assets, making that easier.

Member

@mockersf mockersf Nov 6, 2023


github-merge-queue bot pushed a commit that referenced this pull request Oct 23, 2023
# Objective

- Replace md5 by another hasher, as suggested in
#8624 (comment)
- md5 is not secure, and is slow. use something more secure and faster

## Solution

- Replace md5 by blake3


Putting this PR in the 0.12 as once it's released, changing the hash
algorithm will be a painful breaking change
ameknite pushed a commit to ameknite/bevy that referenced this pull request Nov 6, 2023
# Objective

- Replace md5 by another hasher, as suggested in
bevyengine#8624 (comment)
- md5 is not secure, and is slow. use something more secure and faster

## Solution

- Replace md5 by blake3


Putting this PR in the 0.12 as once it's released, changing the hash
algorithm will be a painful breaking change
github-merge-queue bot pushed a commit that referenced this pull request Jan 3, 2024
# Objective
- No point in keeping Meshes/Images in RAM once they're going to be sent
to the GPU, and kept in VRAM. This saves a _significant_ amount of
memory (several GBs) on scenes like bistro.
- References
  - #1782
  - #8624 

## Solution
- Augment RenderAsset with the capability to unload the underlying asset
after extracting to the render world.
- Mesh/Image now have a cpu_persistent_access field. If this field is
RenderAssetPersistencePolicy::Unload, the asset will be unloaded from
Assets<T>.
- A new AssetEvent is sent upon dropping the last strong handle for the
asset, which signals to the RenderAsset to remove the GPU version of the
asset.

---

## Changelog
- Added `AssetEvent::NoLongerUsed` and
`AssetEvent::is_no_longer_used()`. This event is sent when the last
strong handle of an asset is dropped.
- Rewrote the API for `RenderAsset` to allow for unloading the asset
data from the CPU.
- Added `RenderAssetPersistencePolicy`.
- Added `Mesh::cpu_persistent_access` for memory savings when the asset
is not needed except for on the GPU.
- Added `Image::cpu_persistent_access` for memory savings when the asset
is not needed except for on the GPU.
- Added `ImageLoaderSettings::cpu_persistent_access`.
- Added `ExrTextureLoaderSettings`.
- Added `HdrTextureLoaderSettings`.

## Migration Guide
- Asset loaders (GLTF, etc) now load meshes and textures without
`cpu_persistent_access`. These assets will be removed from
`Assets<Mesh>` and `Assets<Image>` once `RenderAssets<Mesh>` and
`RenderAssets<Image>` contain the GPU versions of these assets, in order
to reduce memory usage. If you require access to the asset data from the
CPU in future frames after the GLTF asset has been loaded, modify all
dependent `Mesh` and `Image` assets and set `cpu_persistent_access` to
`RenderAssetPersistencePolicy::Keep`.
- `Mesh` now requires a new `cpu_persistent_access` field. Set it to
`RenderAssetPersistencePolicy::Keep` to mimic the previous behavior.
- `Image` now requires a new `cpu_persistent_access` field. Set it to
`RenderAssetPersistencePolicy::Keep` to mimic the previous behavior.
- `MorphTargetImage::new()` now requires a new `cpu_persistent_access`
parameter. Set it to `RenderAssetPersistencePolicy::Keep` to mimic the
previous behavior.
- `DynamicTextureAtlasBuilder::add_texture()` now requires that the
`TextureAtlas` you pass has an `Image` with `cpu_persistent_access:
RenderAssetPersistencePolicy::Keep`. Ensure you construct the image
properly for the texture atlas.
- The `RenderAsset` trait has significantly changed, and requires
adapting your existing implementations.
  - The trait now requires `Clone`.
- The `ExtractedAsset` associated type has been removed (the type itself
is now extracted).
  - The signature of `prepare_asset()` is slightly different
- A new `persistence_policy()` method is now required (return
RenderAssetPersistencePolicy::Unload to match the previous behavior).
- Match on the new `NoLongerUsed` variant for exhaustive matches of
`AssetEvent`.
rdrpenguin04 pushed a commit to rdrpenguin04/bevy that referenced this pull request Jan 9, 2024
# Bevy Asset V2 Proposal

## Why Does Bevy Need A New Asset System?

Asset pipelines are a central part of the gamedev process. Bevy's
current asset system is missing a number of features that make it
non-viable for many classes of gamedev. After plenty of discussions and
[a long community feedback
period](bevyengine#3972), we've
identified a number missing features:

* **Asset Preprocessing**: it should be possible to "preprocess" /
"compile" / "crunch" assets at "development time" rather than when the
game starts up. This enables offloading expensive work from deployed
apps, faster asset loading, less runtime memory usage, etc.
* **Per-Asset Loader Settings**: Individual assets cannot define their
own loaders that override the defaults. Additionally, they cannot
provide per-asset settings to their loaders. This is a huge limitation,
as many asset types don't provide all information necessary for Bevy
_inside_ the asset. For example, a raw PNG image says nothing about how
it should be sampled (ex: linear vs nearest).
* **Asset `.meta` files**: assets should have configuration files stored
adjacent to the asset in question, which allows the user to configure
asset-type-specific settings. These settings should be accessible during
the pre-processing phase. Modifying a `.meta` file should trigger a
re-processing / re-load of the asset. It should be possible to configure
asset loaders from the meta file.
* **Processed Asset Hot Reloading**: Changes to processed assets (or
their dependencies) should result in re-processing them and re-loading
the results in live Bevy Apps.
* **Asset Dependency Tracking**: The current bevy_asset has no good way
to wait for asset dependencies to load. It punts this as an exercise for
consumers of the loader apis, which is unreasonable and error prone.
There should be easy, ergonomic ways to wait for assets to load and
block some logic on an asset's entire dependency tree loading.
* **Runtime Asset Loading**: it should be (optionally) possible to load
arbitrary assets dynamically at runtime. This necessitates being able to
deploy and run the asset server alongside Bevy Apps on _all platforms_.
For example, we should be able to invoke the shader compiler at runtime,
stream scenes from sources like the internet, etc. To keep deployed
binaries (and startup times) small, the runtime asset server
configuration should be configurable with different settings compared to
the "pre processor asset server".
* **Multiple Backends**: It should be possible to load assets from
arbitrary sources (filesystems, the internet, remote asset servers, etc.).
* **Asset Packing**: It should be possible to deploy assets in
compressed "packs", which makes it easier and more efficient to
distribute assets with Bevy Apps.
* **Asset Handoff**: It should be possible to hold a "live" asset
handle, which correlates to runtime data, without actually holding the
asset in memory. Ex: it must be possible to hold a reference to a GPU
mesh generated from a "mesh asset" without keeping the mesh data in CPU
memory.
* **Per-Platform Processed Assets**: Different platforms and app
distributions have different capabilities and requirements. Some
platforms need lower asset resolutions or different asset formats to
operate within the hardware constraints of the platform. It should be
possible to define per-platform asset processing profiles. And it should
be possible to deploy only the assets required for a given platform.

These features have architectural implications that are significant
enough to require a full rewrite. The current Bevy Asset implementation
got us this far, but it can take us no farther. This PR defines a brand
new asset system that implements most of these features, while laying
the foundations for the remaining features to be built.

## Bevy Asset V2

Here is a quick overview of the features introduced in this PR.
* **Asset Preprocessing**: Preprocess assets at development time into
more efficient (and configurable) representations
* **Dependency Aware**: Dependencies required to process an asset are
tracked. If an asset's processed dependency changes, it will be
reprocessed
* **Hot Reprocessing/Reloading**: detect changes to asset source files,
reprocess them if they have changed, and then hot-reload them in Bevy
Apps.
* **Only Process Changes**: Assets are only re-processed when their
source file (or meta file) has changed. This uses hashing and timestamps
to avoid processing assets that haven't changed.
* **Transactional and Reliable**: Uses write-ahead logging (a technique
commonly used by databases) to recover from crashes / forced-exits.
Whenever possible it avoids full-reprocessing / only uncompleted
transactions will be reprocessed. When the processor is running in
parallel with a Bevy App, processor asset writes block Bevy App asset
reads. Reading metadata + asset bytes is guaranteed to be transactional
/ correctly paired.
* **Portable / Run anywhere / Database-free**: The processor does not
rely on an in-memory database (although it uses some database techniques
for reliability). This is important because pretty much all in-memory
databases have unsupported platforms or build complications.
* **Configure Processor Defaults Per File Type**: You can say "use this
processor for all files of this type".
* **Custom Processors**: The `Processor` trait is flexible and
unopinionated. It can be implemented by downstream plugins.
* **LoadAndSave Processors**: Most asset processing scenarios can be
expressed as "run AssetLoader A, save the results using AssetSaver X,
and then load the result using AssetLoader B". For example, load this
png image using `PngImageLoader`, which produces an `Image` asset and
then save it using `CompressedImageSaver` (which also produces an
`Image` asset, but in a compressed format), which takes an `Image` asset
as input. This means if you have an `AssetLoader` for an asset, you are
already half way there! It also means that you can share AssetSavers
across multiple loaders. Because `CompressedImageSaver` accepts Bevy's
generic Image asset as input, it means you can also use it with some
future `JpegImageLoader`.
* **Loader and Saver Settings**: Asset Loaders and Savers can now define
their own settings types, which are passed in as input when an asset is
loaded / saved. Each asset can define its own settings.
* **Asset `.meta` files**: configure asset loaders, their settings,
enable/disable processing, and configure processor settings
* **Runtime Asset Dependency Tracking**: Runtime asset dependencies (ex:
if an asset contains a `Handle<Image>`) are tracked by the asset server.
An event is emitted when an asset and all of its dependencies have been
loaded
* **Unprocessed Asset Loading**: Assets do not require preprocessing.
They can be loaded directly. A processed asset is just a "normal" asset
with some extra metadata. Asset Loaders don't need to know or care about
whether or not an asset was processed.
* **Async Asset IO**: Asset readers/writers use async non-blocking
interfaces. Note that because Rust doesn't yet support async traits,
there is a bit of manual Boxing / Future boilerplate. This will
hopefully be removed in the near future when Rust gets async traits.
* **Pluggable Asset Readers and Writers**: Arbitrary asset source
readers/writers are supported, both by the processor and the asset
server.
* **Better Asset Handles**
* **Single Arc Tree**: Asset Handles now use a single arc tree that
represents the lifetime of the asset. This makes their implementation
simpler, more efficient, and allows us to cheaply attach metadata to
handles. Ex: the AssetPath of a handle is now directly accessible on the
handle itself!
* **Const Typed Handles**: typed handles can be constructed in a const
context. No more weird "const untyped converted to typed at runtime"
patterns!
* **Handles and Ids are Smaller / Faster To Hash / Compare**: Typed
`Handle<T>` is now much smaller in memory and `AssetId<T>` is even
smaller.
* **Weak Handle Usage Reduction**: In general Handles are now considered
to be "strong". Bevy features that previously used "weak `Handle<T>`"
have been ported to `AssetId<T>`, which makes it statically clear that
the features do not hold strong handles (while retaining strong type
information). Currently Handle::Weak still exists, but it is very
possible that we can remove that entirely.
* **Efficient / Dense Asset Ids**: Assets now have efficient dense
runtime asset ids, which means we can avoid expensive hash lookups.
Assets are stored in Vecs instead of HashMaps. There are now typed and
untyped ids, which means we no longer need to store dynamic type
information in the ID for typed handles. "AssetPathId" (which was a
nightmare from a performance and correctness standpoint) has been
entirely removed in favor of dense ids (which are retrieved for a path
on load).
* **Direct Asset Loading, with Dependency Tracking**: Assets that are
defined at runtime can still have their dependencies tracked by the
Asset Server (ex: if you create a material at runtime, you can still
wait for its textures to load). This is accomplished via the (currently
optional) "asset dependency visitor" trait. This system can also be used
to define a set of assets to load, then wait for those assets to load.
* **Async folder loading**: Folder loading also uses this system and
immediately returns a handle to the LoadedFolder asset, which means
folder loading no longer blocks on directory traversals.
* **Improved Loader Interface**: Loaders now have a specific "top level
asset type", which makes returning the top-level asset simpler and
statically typed.
* **Basic Image Settings and Processing**: Image assets can now be
processed into the gpu-friendly Basis Universal format. The ImageLoader
now has a setting to define what format the image should be loaded as.
Note that this is just a minimal MVP ... plenty of additional work to do
here. To demo this, enable the `basis-universal` feature and turn on
asset processing.
* **Simpler Audio Play / AudioSink API**: Asset handle providers are
cloneable, which means the Audio resource can mint its own handles. This
means you can now do `let sink_handle = audio.play(music)` instead of
`let sink_handle = audio_sinks.get_handle(audio.play(music))`. Note that
this might still be replaced by
bevyengine#8424.
* **Removed Handle Casting From Engine Features**: Ex: FontAtlases no
longer use casting between handle types
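
As a stdlib-only sketch of the "dense ids" idea from the list above (illustrative; `DenseAssets` and `AssetIndex` here are hypothetical names, not Bevy's actual types): assets live in a `Vec` indexed by a small id, and a generation counter guards against stale ids once a slot is reused, so lookups are array indexing instead of hashing.

```rust
// Stdlib-only sketch of dense, generational asset ids. Names and structure
// are illustrative, not Bevy's actual implementation.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct AssetIndex {
    index: u32,
    generation: u32,
}

struct DenseAssets<T> {
    slots: Vec<(u32, Option<T>)>, // (generation, value)
    free: Vec<u32>,               // slot indices available for reuse
}

impl<T> DenseAssets<T> {
    fn new() -> Self {
        Self { slots: Vec::new(), free: Vec::new() }
    }

    fn insert(&mut self, value: T) -> AssetIndex {
        if let Some(index) = self.free.pop() {
            let slot = &mut self.slots[index as usize];
            slot.1 = Some(value);
            AssetIndex { index, generation: slot.0 }
        } else {
            self.slots.push((0, Some(value)));
            AssetIndex { index: (self.slots.len() - 1) as u32, generation: 0 }
        }
    }

    fn get(&self, id: AssetIndex) -> Option<&T> {
        let slot = self.slots.get(id.index as usize)?;
        (slot.0 == id.generation).then(|| slot.1.as_ref()).flatten()
    }

    fn remove(&mut self, id: AssetIndex) -> Option<T> {
        let slot = self.slots.get_mut(id.index as usize)?;
        if slot.0 != id.generation {
            return None;
        }
        let value = slot.1.take()?;
        slot.0 += 1; // invalidate any outstanding ids for this slot
        self.free.push(id.index);
        Some(value)
    }
}

fn main() {
    let mut assets = DenseAssets::new();
    let id = assets.insert("mesh data");
    assert_eq!(assets.get(id), Some(&"mesh data"));
    assets.remove(id);
    assert_eq!(assets.get(id), None); // stale id no longer resolves
    println!("dense id lookup ok");
}
```

Bevy's real storage also distinguishes typed and untyped ids and ties into handle lifetimes; this only shows why index-based lookup avoids hash-map overhead.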

## Using The New Asset System

### Normal Unprocessed Asset Loading

By default the `AssetPlugin` does not use processing. It behaves pretty
much the same way as the old system.

If you are defining a custom asset, first derive `Asset`:

```rust
#[derive(Asset)]
struct Thing {
    value: String,
}
```

Initialize the asset:
```rust
app.init_asset::<Thing>()
```

Implement a new `AssetLoader` for it:

```rust
#[derive(Default)]
struct ThingLoader;

#[derive(Serialize, Deserialize, Default)]
pub struct ThingSettings {
    some_setting: bool,
}

impl AssetLoader for ThingLoader {
    type Asset = Thing;
    type Settings = ThingSettings;

    fn load<'a>(
        &'a self,
        reader: &'a mut Reader,
        settings: &'a ThingSettings,
        load_context: &'a mut LoadContext,
    ) -> BoxedFuture<'a, Result<Thing, anyhow::Error>> {
        Box::pin(async move {
            let mut bytes = Vec::new();
            reader.read_to_end(&mut bytes).await?;
            // convert bytes to a value somehow; for example:
            let value = String::from_utf8(bytes)?;
            Ok(Thing { value })
        })
    }

    fn extensions(&self) -> &[&str] {
        &["thing"]
    }
}
```

Note that this interface will get much cleaner once Rust gets support
for async traits. `Reader` is an async `futures_io::AsyncRead`. You can
stream bytes as they come in or read them all into a `Vec<u8>`,
depending on the context. You can use `let handle =
load_context.load(path)` to kick off a dependency load, retrieve a
handle, and register the dependency for the asset.

Then just register the loader in your Bevy app:

```rust
app.init_asset_loader::<ThingLoader>()
```

Now just add your `Thing` asset files into the `assets` folder and load
them like this:

```rust
fn system(asset_server: Res<AssetServer>) {
    let handle: Handle<Thing> = asset_server.load("cool.thing");
}
```

You can check load states directly via the asset server:

```rust
if asset_server.load_state(&handle) == LoadState::Loaded { }
```

You can also listen for events:

```rust
fn system(mut events: EventReader<AssetEvent<Thing>>, handle: Res<SomeThingHandle>) {
    for event in events.iter() {
        if event.is_loaded_with_dependencies(&handle) {
        }
    }
}
```

Note the new `AssetEvent::LoadedWithDependencies`, which only fires when
the asset is loaded _and_ all dependencies (and their dependencies) have
loaded.

Unlike the old asset system, for a given asset path all `Handle<T>`
values point to the same underlying Arc. This means Handles can cheaply
hold more asset information, such as the AssetPath:

```rust
// prints the AssetPath of the handle
info!("{:?}", handle.path())
```

### Processed Assets

Asset processing can be enabled via the `AssetPlugin`. When developing
Bevy Apps with processed assets, do this:

```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev()))
```

This runs the `AssetProcessor` in the background with hot-reloading. It
reads assets from the `assets` folder, processes them, and writes them
to the `.imported_assets` folder. Asset loads in the Bevy App will wait
for a processed version of the asset to become available. If an asset in
the `assets` folder changes, it will be reprocessed and hot-reloaded in
the Bevy App.

When deploying processed Bevy apps, do this:

```rust
app.add_plugins(DefaultPlugins.set(AssetPlugin::processed()))
```

This does not run the `AssetProcessor` in the background. It behaves
like `AssetPlugin::unprocessed()`, but reads assets from
`.imported_assets`.

When the `AssetProcessor` is running, it will populate sibling `.meta`
files for assets in the `assets` folder. Meta files for assets that do
not have a processor configured look like this:

```rust
(
    meta_format_version: "1.0",
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: FromExtension,
        ),
    ),
)
```

This is metadata for an image asset. For example, if you have
`assets/my_sprite.png`, this could be the metadata stored at
`assets/my_sprite.png.meta`. Meta files are totally optional. If no
metadata exists, the default settings will be used.

In short, this file says "load this asset with the ImageLoader and use
the file extension to determine the image type". This type of meta file
is supported in all AssetPlugin modes. If in `Unprocessed` mode, the
asset (with the meta settings) will be loaded directly. If in
`ProcessedDev` mode, the asset file will be copied directly to the
`.imported_assets` folder. The meta will also be copied directly to the
`.imported_assets` folder, but with one addition:

```rust
(
    meta_format_version: "1.0",
    processed_info: Some((
        hash: 12415480888597742505,
        full_hash: 14344495437905856884,
        process_dependencies: [],
    )),
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: FromExtension,
        ),
    ),
)
```

`processed_info` contains `hash` (a direct hash of the asset and meta
bytes), `full_hash` (a hash of `hash` and the hashes of all
`process_dependencies`), and `process_dependencies` (the `path` and
`full_hash` of every process_dependency). A "process dependency" is an
asset dependency that is _directly_ used when processing the asset.
Images do not have process dependencies, so this is empty.
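
The relationship between `hash` and `full_hash` can be sketched with stdlib hashing (illustrative only; the actual processor uses its own hash function and input encoding, and these helper names are hypothetical):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical helper: hash an arbitrary byte slice.
fn hash_bytes(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    hasher.finish()
}

/// `hash` covers this asset's bytes plus its meta bytes.
fn asset_hash(asset_bytes: &[u8], meta_bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    asset_bytes.hash(&mut hasher);
    meta_bytes.hash(&mut hasher);
    hasher.finish()
}

/// `full_hash` covers `hash` plus the `full_hash` of every process
/// dependency, so a change anywhere in the dependency tree changes it.
fn full_hash(hash: u64, dependency_full_hashes: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    hash.hash(&mut hasher);
    for dep in dependency_full_hashes {
        dep.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    let h = asset_hash(b"png bytes", b"meta bytes");
    // no process dependencies: full_hash is derived from `hash` alone
    let full = full_hash(h, &[]);
    // a dependency's hash feeds into full_hash, but `hash` itself is unchanged
    let full_with_dep = full_hash(h, &[hash_bytes(b"dependency bytes")]);
    assert_ne!(full, full_with_dep);
    println!("hash sketch ok");
}
```

This is why an unchanged asset whose dependency changed still gets a new `full_hash` (and is reprocessed) while its `hash` stays the same.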

When the processor is enabled, you can use the `Process` metadata
config:

```rust
(
    meta_format_version: "1.0",
    asset: Process(
        processor: "bevy_asset::processor::process::LoadAndSave<bevy_render::texture::image_loader::ImageLoader, bevy_render::texture::compressed_image_saver::CompressedImageSaver>",
        settings: (
            loader_settings: (
                format: FromExtension,
            ),
            saver_settings: (
                generate_mipmaps: true,
            ),
        ),
    ),
)
```

This configures the asset to use the `LoadAndSave` processor, which runs
an AssetLoader and feeds the result into an AssetSaver (which saves the
given Asset and defines a loader to load it with). (For terseness,
`LoadAndSave` will likely get a shorter/friendlier type name when [Stable
Type Paths](bevyengine#7184) lands.) `LoadAndSave` is likely to be the most common
processor type, but arbitrary processors are supported.

`CompressedImageSaver` saves an `Image` in the Basis Universal format
and configures the ImageLoader to load it as basis universal. The
`AssetProcessor` will read this meta, run it through the LoadAndSave
processor, and write the basis-universal version of the image to
`.imported_assets`. The final metadata will look like this:

```rust
(
    meta_format_version: "1.0",
    processed_info: Some((
        hash: 905599590923828066,
        full_hash: 9948823010183819117,
        process_dependencies: [],
    )),
    asset: Load(
        loader: "bevy_render::texture::image_loader::ImageLoader",
        settings: (
            format: Format(Basis),
        ),
    ),
)
```

To try basis-universal processing out in Bevy examples, (for example
`sprite.rs`), change `add_plugins(DefaultPlugins)` to
`add_plugins(DefaultPlugins.set(AssetPlugin::processed_dev()))` and run
with the `basis-universal` feature enabled: `cargo run
--features=basis-universal --example sprite`.

To create a custom processor, there are two main paths:
1. Use the `LoadAndSave` processor with an existing `AssetLoader`.
Implement the `AssetSaver` trait, register the processor using
`asset_processor.register_processor::<LoadAndSave<ImageLoader,
CompressedImageSaver>>(image_saver.into())`.
2. Implement the `Process` trait directly and register it using:
`asset_processor.register_processor(thing_processor)`.
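
The `LoadAndSave` shape can be sketched with simplified, synchronous traits (illustrative; the real `AssetLoader`/`AssetSaver` traits are async and carry readers, settings, and error handling, and the toy types below are hypothetical):

```rust
// Simplified sketch of the LoadAndSave pattern: run a loader, feed its
// output asset into a saver, and emit the saved bytes.
trait SimpleLoader {
    type Asset;
    fn load(&self, bytes: &[u8]) -> Self::Asset;
}

trait SimpleSaver {
    type Asset;
    fn save(&self, asset: &Self::Asset) -> Vec<u8>;
}

struct LoadAndSave<L, S> {
    loader: L,
    saver: S,
}

impl<L, S> LoadAndSave<L, S>
where
    L: SimpleLoader,
    // the saver's input type must match the loader's output type
    S: SimpleSaver<Asset = L::Asset>,
{
    fn process(&self, source_bytes: &[u8]) -> Vec<u8> {
        let asset = self.loader.load(source_bytes);
        self.saver.save(&asset)
    }
}

// A toy "image" pipeline: parse bytes into an Image, save it "compressed".
struct Image(String);

struct TextImageLoader;
impl SimpleLoader for TextImageLoader {
    type Asset = Image;
    fn load(&self, bytes: &[u8]) -> Image {
        Image(String::from_utf8_lossy(bytes).into_owned())
    }
}

struct CompressedImageSaver;
impl SimpleSaver for CompressedImageSaver {
    type Asset = Image;
    fn save(&self, asset: &Image) -> Vec<u8> {
        // stand-in for real compression
        format!("compressed:{}", asset.0).into_bytes()
    }
}

fn main() {
    let processor = LoadAndSave {
        loader: TextImageLoader,
        saver: CompressedImageSaver,
    };
    let out = processor.process(b"pixels");
    assert_eq!(out, b"compressed:pixels".to_vec());
    println!("load-and-save sketch ok");
}
```

The type-level constraint (`S::Asset = L::Asset`) is what lets one saver be shared across multiple loaders, as described above for `CompressedImageSaver`.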

You can configure default processors for file extensions like this:

```rust
asset_processor.set_default_processor::<ThingProcessor>("thing")
```

There is one more metadata type to be aware of:

```rust
(
    meta_format_version: "1.0",
    asset: Ignore,
)
```

This will ignore the asset during processing / prevent it from being
written to `.imported_assets`.

The AssetProcessor stores a transaction log at `.imported_assets/log`
and uses it to gracefully recover from unexpected stops. This means you
can force-quit the processor (and Bevy Apps running the processor in
parallel) at arbitrary times!

`.imported_assets` is "local state". It should _not_ be checked into
source control. It should also be considered "read only". In practice,
you _can_ modify processed assets and processed metadata if you really
need to test something. But those modifications will not be represented
in the hashes of the assets, so the processed state will be "out of
sync" with the source assets. The processor _will not_ fix this for you.
Either revert the change after you have tested it, or delete the
processed files so they can be re-populated.

## Open Questions

There are a number of open questions to be discussed. We should decide
if they need to be addressed in this PR and if so, how we will address
them:

### Implied Dependencies vs Dependency Enumeration

There are currently two ways to populate asset dependencies:
* **Implied via AssetLoaders**: if an AssetLoader loads an asset (and
retrieves a handle), a dependency is added to the list.
* **Explicit via the optional Asset::visit_dependencies**: if
`server.load_asset(my_asset)` is called, it will call
`my_asset.visit_dependencies`, which will grab dependencies that have
been manually defined for the asset via the Asset trait impl (which can
be derived).

This means that defining explicit dependencies is optional for "loaded
assets". And the list of dependencies is always accurate because loaders
can only produce Handles if they register dependencies. If an asset was
loaded with an AssetLoader, it only uses the implied dependencies. If an
asset was created at runtime and added with
`asset_server.load_asset(MyAsset)`, it will use
`Asset::visit_dependencies`.
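
A minimal sketch of the explicit mechanism (stdlib-only; `VisitDependencies`, `UntypedId`, and `collect_dependencies` are hypothetical stand-ins for the real `Asset` trait machinery and `Handle` types):

```rust
// Sketch of "visit dependencies": an asset enumerates the ids of the assets
// it depends on, so the server can track the whole dependency tree.
type UntypedId = u64;

trait VisitDependencies {
    fn visit_dependencies(&self, visit: &mut impl FnMut(UntypedId));
}

// A material that depends on one required and one optional texture.
struct Material {
    base_color_texture: UntypedId,
    normal_map: Option<UntypedId>,
}

impl VisitDependencies for Material {
    fn visit_dependencies(&self, visit: &mut impl FnMut(UntypedId)) {
        visit(self.base_color_texture);
        if let Some(normal) = self.normal_map {
            visit(normal);
        }
    }
}

/// What a `load_asset`-style call might do with this information: collect
/// the dependency ids so their load states can be tracked alongside the
/// asset itself.
fn collect_dependencies(asset: &impl VisitDependencies) -> Vec<UntypedId> {
    let mut deps = Vec::new();
    asset.visit_dependencies(&mut |id| deps.push(id));
    deps
}

fn main() {
    let material = Material {
        base_color_texture: 1,
        normal_map: Some(2),
    };
    assert_eq!(collect_dependencies(&material), vec![1, 2]);
    println!("visit sketch ok");
}
```

The behavior-mismatch concern above is exactly the case where an impl like this visits a different set of ids than the loader actually registered.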

However this can create a behavior mismatch between loaded assets and
equivalent "created at runtime" assets if `Asset::visit_dependencies`
doesn't exactly match the dependencies produced by the AssetLoader. This
behavior mismatch can be resolved by completely removing "implied loader
dependencies" and requiring `Asset::visit_dependencies` to supply
dependency data. But this creates two problems:
* It makes defining loaded assets harder and more error prone: Devs must
remember to manually annotate asset dependencies with `#[dependency]`
when deriving `Asset`. For more complicated assets (such as scenes), the
derive likely wouldn't be sufficient and a manual `visit_dependencies`
impl would be required.
* Removes the ability to immediately kick off dependency loads: When
AssetLoaders retrieve a Handle, they also immediately kick off an asset
load for the handle, which means it can start loading in parallel
_before_ the asset finishes loading. For large assets, this could be
significant. (although this could be mitigated for processed assets if
we store dependencies in the processed meta file and load them ahead of
time)

### Eager ProcessorDev Asset Loading

I made a controversial call in the interest of fast startup times ("time
to first pixel") for the "processor dev mode configuration". When
initializing the AssetProcessor, current processed versions of unchanged
assets are yielded immediately, even if their dependencies haven't been
checked yet for reprocessing. This means that
non-current-state-of-filesystem-but-previously-valid assets might be
returned to the App first, then hot-reloaded if/when their dependencies
change and the asset is reprocessed.

Is this behavior desirable? There is largely one alternative: do not
yield an asset from the processor to the app until all of its
dependencies have been checked for changes. In some common cases (load
dependency has not changed since last run) this will increase startup
time. The main question is "by how much" and is that slower startup time
worth it in the interest of only yielding assets that are true to the
current state of the filesystem. Should this be configurable? I'm
starting to think we should only yield an asset after its (historical)
dependencies have been checked for changes + processed as necessary, but
I'm curious what you all think.

### Paths Are Currently The Only Canonical ID / Do We Want Asset UUIDs?

In this implementation AssetPaths are the only canonical asset
identifier (just like the previous Bevy Asset system and Godot). Moving
assets will result in re-scans (and currently reprocessing, although
reprocessing can easily be avoided with some changes). Asset
renames/moves will break code and assets that rely on specific paths,
unless those paths are fixed up.

Do we want / need "stable asset uuids"? Introducing them is very
possible:
1. Generate a UUID and include it in .meta files
2. Support UUID in AssetPath
3. Generate "asset indices" which are loaded on startup and map UUIDs to
paths.
4. (Maybe) Consider only supporting UUIDs for processed assets so we can
generate quick-to-load indices instead of scanning meta files.

The main "pro" is that assets referencing UUIDs don't need to be
migrated when a path changes. The main "con" is that UUIDs cannot be
"lazily resolved" like paths. They need a full view of all assets to
answer the question "does this UUID exist". Which means UUIDs require
the AssetProcessor to fully finish startup scans before saying an asset
doesn't exist. And they essentially require asset pre-processing to use
in apps, because scanning all asset metadata files at runtime to resolve
a UUID is not viable for medium-to-large apps. It really requires a
pre-generated UUID index, which must be loaded before querying for
assets.

I personally think this should be investigated in a separate PR. Paths
aren't going anywhere ... _everyone_ uses filesystems (and
filesystem-like apis) to manage their asset source files. I consider
them permanent canonical asset information. Additionally, they behave
well for both processed and unprocessed asset modes. Given that Bevy is
supporting both, this feels like the right canonical ID to start with.
UUIDs (and maybe even other indexed-identifier types) can be added later
as necessary.

### Folder / File Naming Conventions

All asset processing config currently lives in the `.imported_assets`
folder. The processor transaction log is in `.imported_assets/log`.
Processed assets are added to `.imported_assets/Default`, which will
make migrating to processed asset profiles (ex: a
`.imported_assets/Mobile` profile) a non-breaking change. It also allows
us to create top-level files like `.imported_assets/log` without it
being interpreted as an asset. Meta files currently have a `.meta`
suffix. Do we like these names and conventions?

### Should the `AssetPlugin::processed_dev` configuration enable `watch_for_changes` automatically?

Currently it does (which I think makes sense), but it does make it the
only configuration that enables watch_for_changes by default.

### Discuss the `on_loaded` High Level Interface

This PR includes a very rough "proof of concept" `on_loaded` system
adapter that uses the `LoadedWithDependencies` event in combination with
`asset_server.load_asset` dependency tracking to support this pattern:

```rust
fn main() {
    App::new()
        .init_asset::<MyAssets>()
        .add_systems(Update, on_loaded(spawn_cat))
        .run();
}

#[derive(Asset, Clone)]
struct MyAssets {
    #[dependency]
    picture_of_my_cat: Handle<Image>,
    #[dependency]
    picture_of_my_other_cat: Handle<Image>,
}

impl FromWorld for MyAssets {
    fn from_world(world: &mut World) -> Self {
        let server = world.resource::<AssetServer>();
        Self {
            picture_of_my_cat: server.load("meow.png"),
            picture_of_my_other_cat: server.load("meeeeeeeow.png"),
        }
    }
}

fn spawn_cat(In(my_assets): In<MyAssets>, mut commands: Commands) {
    commands.spawn(SpriteBundle {
        texture: my_assets.picture_of_my_cat.clone(),  
        ..default()
    });
    
    commands.spawn(SpriteBundle {
        texture: my_assets.picture_of_my_other_cat.clone(),  
        ..default()
    });
}

```

The implementation is _very_ rough. And it is currently unsafe because
`bevy_ecs` doesn't expose some internals to do this safely from inside
`bevy_asset`. There are plenty of unanswered questions like:
* Do we add a "Loadable" derive? (effectively automating the `FromWorld`
implementation above)
* Should `MyAssets` even be an Asset? (largely implemented this way
because it elegantly builds on `server.load_asset(MyAsset { .. })`
dependency tracking).

We should think hard about what our ideal API looks like (and if this is
a pattern we want to support). Not necessarily something we need to
solve in this PR. The current `on_loaded` impl should probably be
removed from this PR before merging.

## Clarifying Questions

### What about Assets as Entities?

This Bevy Asset V2 proposal implementation initially stored Assets as
ECS Entities. Instead of `AssetId<T>` + the `Assets<T>` resource it used
`Entity` as the asset id and Asset values were just ECS components.
There are plenty of compelling reasons to do this:
1. Easier to inline assets in Bevy Scenes (as they are "just" normal
entities + components)
2. More flexible queries: use the power of the ECS to filter assets (ex:
`Query<Mesh, With<Tree>>`).
3. Extensible. Users can add arbitrary component data to assets.
4. Things like "component visualization tools" work out of the box to
visualize asset data.

However Assets as Entities has a ton of caveats right now:
* We need to be able to allocate entity ids without a direct World
reference (aka rework the id allocator in Entities ... I worked around this
in my prototypes by just pre-allocating big chunks of entities)
* We want asset change events in addition to ECS change tracking ... how
do we populate them when mutations can come from anywhere? Do we use
Changed queries? This would require iterating over the change data for
all assets every frame. Is this acceptable or should we implement a new
"event based" component change detection option?
* Reconciling manually created assets with asset-system managed assets
has some nuance (ex: are they "loaded" / do they also have that
component metadata?)
* How do we handle "static" / default entity handles? (This ties in to the
Entity Indices discussion:
bevyengine#8319.) This is necessary
for things like "built in" assets and default handles in things like
SpriteBundle.
* Storing asset information as a component makes it easy to "invalidate"
asset state by removing the component (or forcing modifications).
Ideally we have ways to lock this down (some combination of Rust type
privacy and ECS validation)

In practice, how we store and identify assets is a reasonably
superficial change (porting off of Assets as Entities and implementing
dedicated storage + ids took less than a day). So once we sort out the
remaining challenges the flip should be straightforward. Additionally, I
do still have "Assets as Entities" in my commit history, so we can reuse
that work. I personally think "assets as entities" is a good endgame,
but it also doesn't provide _significant_ value at the moment and it
certainly isn't ready yet with the current state of things.

### Why not Distill?

[Distill](https://github.com/amethyst/distill) is a high quality fully
featured asset system built in Rust. It is very natural to ask "why not
just use Distill?".

It is also worth calling out that for a while, [we planned on adopting
Distill / I signed off on
it](bevyengine#708).

However I think Bevy has a number of constraints that make Distill
adoption suboptimal:
* **Architectural Simplicity:**
* Distill's processor requires an in-memory database (lmdb) and RPC
networked API (using Cap'n Proto). Each of these introduces API
complexity that increases maintenance burden and "code grokability".
Ignoring tests, documentation, and examples, Distill has 24,237 lines of
Rust code (including generated code for RPC + database interactions). If
you ignore generated code, it has 11,499 lines.
* Bevy builds the AssetProcessor and AssetServer using pluggable
AssetReader/AssetWriter Rust traits with simple io interfaces. They do
not necessitate databases or RPC interfaces (although Readers/Writers
could use them if that is desired). Bevy Asset V2 (at the time of
writing this PR) is 5,384 lines of Rust code (ignoring tests,
documentation, and examples). Grain of salt: Distill does have more
features currently (ex: Asset Packing, GUIDS, remote-out-of-process
asset processor). I do plan to implement these features in Bevy Asset V2
and I personally highly doubt they will meaningfully close the 6,115
lines-of-code gap.
* This complexity gap (which while illustrated by lines of code, is much
bigger than just that) is noteworthy to me. Bevy should be hackable and
there are pillars of Distill that are very hard to understand and
extend. This is a matter of opinion (and Bevy Asset V2 also has
complicated areas), but I think Bevy Asset V2 is much more approachable
for the average developer.
* Necessary disclaimer: counting lines of code is an extremely rough
complexity metric. Read the code and form your own opinions.
* **Optional Asset Processing:** Not all Bevy Apps (or Bevy App
developers) need / want asset preprocessing. Processing increases the
complexity of the development environment by introducing things like
meta files, imported asset storage, running processors in the
background, waiting for processing to finish, etc. Distill _requires_
preprocessing to work. With Bevy Asset V2 processing is fully opt-in.
The AssetServer isn't directly aware of asset processors at all.
AssetLoaders only care about converting bytes to runtime Assets ... they
don't know or care if the bytes were pre-processed or not. Processing is
"elegantly" (forgive my self-congratulatory phrasing) layered on top and
builds on the existing Asset system primitives.
* **Direct Filesystem Access to Processed Asset State:** Distill stores
processed assets in a database. This makes debugging / inspecting the
processed outputs harder (either requires special tooling to query the
database or they need to be "deployed" to be inspected). Bevy Asset V2,
on the other hand, stores processed assets in the filesystem (by default
... this is configurable). This makes interacting with the processed
state more natural. Note that both Godot and Unity's new asset system
store processed assets in the filesystem.
* **Portability**: Because Distill's processor uses lmdb and RPC
networking, it cannot be run on certain platforms (ex: lmdb is a
non-rust dependency that cannot run on the web, some platforms don't
support running network servers). Bevy should be able to process assets
everywhere (ex: run the Bevy Editor on the web, compile + process
shaders on mobile, etc). Distill does partially mitigate this problem by
supporting "streaming" assets via the RPC protocol, but this is not a
full solve from my perspective. And Bevy Asset V2 can (in theory) also
stream assets (without requiring RPC, although this isn't implemented
yet).

Note that I _do_ still think Distill would be a solid asset system for
Bevy. But I think the approach in this PR is a better solve for Bevy's
specific "asset system requirements".

### Doesn't async-fs just shim requests to "sync" `std::fs`? What is the
point?

"True async file io" has limited / spotty platform support. async-fs
(and the rust async ecosystem generally ... ex Tokio) currently use
async wrappers over std::fs that offload blocking requests to separate
threads. This may feel unsatisfying, but it _does_ still provide value
because it prevents our task pools from blocking on file system
operations (which would prevent progress when there are many tasks to
do, but all threads in a pool are currently blocking on file system
ops).

Additionally, using async APIs for our AssetReaders and AssetWriters
also provides value because we can later add support for "true async
file io" for platforms that support it. _And_ we can implement other
"true async io" asset backends (such as networked asset io).
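
To make the offloading point concrete, here is a minimal std-only sketch of the pattern async-fs uses internally (the function name `read_bytes_offloaded` is made up for illustration, and a channel stands in for an async future): the blocking `std::fs` call runs on a separate thread, so the calling task-pool thread is never parked on disk I/O.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch of the "offload blocking I/O" pattern: the blocking std::fs call
// runs on a dedicated thread, and the caller receives the result through a
// channel instead of blocking its own (task pool) thread.
fn read_bytes_offloaded(path: std::path::PathBuf) -> mpsc::Receiver<std::io::Result<Vec<u8>>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Blocking I/O happens here, off the caller's thread.
        let result = std::fs::read(&path);
        // Ignore send errors: the caller may have dropped the receiver.
        let _ = tx.send(result);
    });
    rx
}

fn main() {
    // Write a temp file, then read it back without blocking this thread.
    let path = std::env::temp_dir().join("asset_v2_demo.bin");
    std::fs::write(&path, b"hello").unwrap();
    let rx = read_bytes_offloaded(path.clone());
    // The caller's thread is free to do other work here...
    let bytes = rx.recv().unwrap().unwrap();
    assert_eq!(bytes, b"hello");
    let _ = std::fs::remove_file(&path);
}
```

Real async runtimes return a future rather than a channel receiver, but the underlying mechanics are the same: the blocking syscall is moved off the pooled worker threads.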

## Draft TODO

- [x] Fill in missing filesystem event APIs: file removed event (which
is expressed as dangling RenameFrom events in some cases), file/folder
renamed event
- [x] Assets without loaders are not moved to the processed folder. This
breaks things like referenced `.bin` files for GLTFs. This should be
configurable per-non-asset-type.
- [x] Initial implementation of Reflect and FromReflect for Handle. The
"deserialization" parity bar is low here as this only worked with static
UUIDs in the old impl ... this is a non-trivial problem. Either we add a
Handle::AssetPath variant that gets "upgraded" to a strong handle on
scene load or we use a separate AssetRef type for Bevy scenes (which is
converted to a runtime Handle on load). This deserves its own discussion
in a different pr.
- [x] Populate read_asset_bytes hash when run by the processor (a bit of
a special case .. when run by the processor the processed meta will
contain the hash so we don't need to compute it on the spot, but we
don't want/need to read the meta when run by the main AssetServer)
- [x] Delay hot reloading: currently filesystem events are handled
immediately, which creates timing issues in some cases. For example hot
reloading images can sometimes break because the image isn't finished
writing. We should add a delay, likely similar to the [implementation in
this PR](bevyengine#8503).
- [x] Port old platform-specific AssetIo implementations to the new
AssetReader interface (currently missing Android and web)
- [x] Resolve on_loaded unsafety (either by removing the API entirely or
removing the unsafe)
- [x]  Runtime loader setting overrides
- [x] Remove remaining unwraps that should be error-handled. There are a
number of TODOs here
- [x] Pretty AssetPath Display impl
- [x] Document more APIs
- [x] Resolve spurious "reloading because it has changed" events (to
repro run load_gltf with `processed_dev()`)
- [x] load_dependency hot reloading currently only works for processed
assets. If processing is disabled, load_dependency changes are not hot
reloaded.
- [x] Replace AssetInfo dependency load/fail counters with
`loading_dependencies: HashSet<UntypedAssetId>` to prevent reloads from
(potentially) breaking counters. Storing this will also enable
"dependency reloaded" events (see [Next Steps](#next-steps))
- [x] Re-add filesystem watcher cargo feature gate (currently it is not
optional)
- [ ] Migration Guide
- [ ] Changelog
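
The "delay hot reloading" item above boils down to debouncing filesystem events. A minimal std-only sketch of that idea (the `Debouncer` type is made up for illustration, not Bevy's actual implementation): only act on a path once it has been quiet for a grace period, so we don't reload a file that is still being written.

```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::{Duration, Instant};

// Remember the time of the last filesystem event per path; a path is only
// "ready" for reload once no new event has arrived for `quiet_period`.
struct Debouncer {
    quiet_period: Duration,
    last_event: HashMap<PathBuf, Instant>,
}

impl Debouncer {
    fn new(quiet_period: Duration) -> Self {
        Self { quiet_period, last_event: HashMap::new() }
    }

    // Record that `path` changed at time `now` (resets its quiet timer).
    fn record(&mut self, path: PathBuf, now: Instant) {
        self.last_event.insert(path, now);
    }

    // Drain and return paths whose last event is older than the quiet period.
    fn ready(&mut self, now: Instant) -> Vec<PathBuf> {
        let quiet = self.quiet_period;
        let mut ready = Vec::new();
        self.last_event.retain(|path, &mut last| {
            if now.duration_since(last) >= quiet {
                ready.push(path.clone());
                false // drop from the pending map
            } else {
                true // still within the quiet period
            }
        });
        ready
    }
}

fn main() {
    let mut debouncer = Debouncer::new(Duration::from_millis(50));
    let start = Instant::now();
    debouncer.record(PathBuf::from("textures/player.png"), start);
    // Immediately after the event, nothing is ready yet.
    assert!(debouncer.ready(start).is_empty());
    // Once the quiet period has elapsed, the path is reported ready.
    let later = start + Duration::from_millis(60);
    assert_eq!(debouncer.ready(later), vec![PathBuf::from("textures/player.png")]);
}
```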

## Followup TODO

- [ ] Replace "eager unchanged processed asset loading" behavior with
"don't return unchanged processed assets until dependencies have been
checked".
- [ ] Add true `Ignore` AssetAction that does not copy the asset to the
imported_assets folder.
- [ ] Finish "live asset unloading" (ex: free up CPU asset memory after
uploading an image to the GPU), rethink RenderAssets, and port renderer
features. The `Assets` collection uses `Option<T>` for asset storage to
support its removal. (1) the Option might not actually be necessary ...
might be able to just remove from the collection entirely (2) need to
finalize removal apis
- [ ] Try replacing the "channel based" asset id recycling with
something a bit more efficient (ex: we might be able to use raw atomic
ints with some cleverness)
- [ ] Consider adding UUIDs to processed assets (scoped just to helping
identify moved assets ... not exposed to load queries ... see [Next
Steps](#next-steps))
- [ ] Store "last modified" source asset and meta timestamps in
processed meta files to enable skipping expensive hashing when the file
wasn't changed
- [ ] Fix "slow loop" handle drop fix 
- [ ] Migrate to TypeName
- [x] Handle "loader preregistration". See bevyengine#9429
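
The "last modified timestamps" item above could look roughly like this. This is a sketch under stated assumptions: `stored_mtime` stands in for a hypothetical timestamp field read from the processed `.meta` file, not an existing Bevy API.

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::SystemTime;

// If the stored modification timestamp matches the file's current one,
// assume the content is unchanged and skip hashing entirely; otherwise
// fall back to the expensive content hash.
fn needs_rehash(path: &Path, stored_mtime: Option<SystemTime>) -> io::Result<bool> {
    let current = fs::metadata(path)?.modified()?;
    Ok(match stored_mtime {
        // Timestamps match: the file was (very likely) not touched.
        Some(stored) if stored == current => false,
        // No stored timestamp, or a mismatch: hash to be sure.
        _ => true,
    })
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("asset_v2_mtime_demo.txt");
    fs::write(&path, "source asset bytes")?;
    let mtime = fs::metadata(&path)?.modified()?;
    // Stored timestamp matches: no need to rehash.
    assert!(!needs_rehash(&path, Some(mtime))?);
    // No stored timestamp (e.g. first run): hash.
    assert!(needs_rehash(&path, None)?);
    let _ = fs::remove_file(&path);
    Ok(())
}
```

A timestamp mismatch doesn't prove the content changed (touching a file updates its mtime), which is why the mismatch case falls back to hashing rather than assuming a change.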

## Next Steps

* **Configurable per-type defaults for AssetMeta**: It should be
possible to add configuration like "all png image meta should default to
using nearest sampling" (currently this is hard-coded in per-loader/processor
`Settings::default()` impls). Also see the "Folder Meta" bullet point.
* **Avoid Reprocessing on Asset Renames / Moves**: See the "canonical
asset ids" discussion in [Open Questions](#open-questions) and the
relevant bullet point in [Draft TODO](#draft-todo). Even without
canonical ids, folder renames could avoid reprocessing in some cases.
* **Multiple Asset Sources**: Expand AssetPath to support "asset source
names" and support multiple AssetReaders in the asset server (ex:
`webserver://some_path/image.png` backed by an Http webserver
AssetReader). The "default" asset reader would use normal
`some_path/image.png` paths. Ideally this works in combination with
multiple AssetWatchers for hot-reloading
* **Stable Type Names**: this pr removes the TypeUuid requirement from
assets in favor of `std::any::type_name`. This makes defining assets
easier (no need to generate a new uuid / use weird proc macro syntax).
It also makes reading meta files easier (because things have "friendly
names"). We also use type names for components in scene files. If they
are good enough for components, they are good enough for assets. And
consistency across Bevy pillars is desirable. However,
`std::any::type_name` is not guaranteed to be stable (although in
practice it is). We've developed a [stable type
path](bevyengine#7184) to resolve this,
which should be adopted when it is ready.
* **Command Line Interface**: It should be possible to run the asset
processor in a separate process from the command line. This will also
require building a network-server-backed AssetReader to communicate
between the app and the processor. We've been planning to build a "bevy
cli" for a while. This seems like a good excuse to build it.
* **Asset Packing**: This is largely an additive feature, so it made
sense to me to punt this until we've laid the foundations in this PR.
* **Per-Platform Processed Assets**: It should be possible to generate
assets for multiple platforms by supporting multiple "processor
profiles" per asset (ex: compress with format X on PC and Y on iOS). I
think there should probably be arbitrary "profiles" (which can be
separate from actual platforms), which are then assigned to a given
platform when generating the final asset distribution for that platform.
Ex: maybe devs want a "Mobile" profile that is shared between iOS and
Android. Or a "LowEnd" profile shared between web and mobile.
* **Versioning and Migrations**: Assets, Loaders, Savers, and Processors
need to have versions to determine if their schema is valid. If an asset
/ loader version is incompatible with the current version expected at
runtime, the processor should be able to migrate them. I think we should
try using Bevy Reflect for this, as it would allow us to load the old
version as a dynamic Reflect type without actually having the old Rust
type. It would also allow us to define "patches" to migrate between
versions (Bevy Reflect devs are currently working on patching). The
`.meta` file already has its own format version. Migrating that to new
versions should also be possible.
* **Real Copy-on-write AssetPaths**: Rust's actual Cow (clone-on-write
type) currently used by AssetPath can still result in String clones that
aren't actually necessary (cloning an Owned Cow clones the contents).
Bevy's asset system requires cloning AssetPaths in a number of places,
which result in actual clones of the internal Strings. This is not
efficient. AssetPath internals should be reworked to exhibit truer
cow-like-behavior that reduces String clones to the absolute minimum.
* **Consider processor-less processing**: In theory the AssetServer
could run processors "inline" even if the background AssetProcessor is
disabled. If we decide this is actually desirable, we could add this.
But I don't think it's a priority in the short or medium term.
* **Pre-emptive dependency loading**: We could encode dependencies in
processed meta files, which could then be used by the Asset Server to
kick off dependency loads as early as possible (prior to starting the
actual asset load). Is this desirable? How much time would this save in
practice?
* **Optimize Processor With UntypedAssetIds**: The processor exclusively
uses AssetPath to identify assets currently. It might be possible to
swap these out for UntypedAssetIds in some places, which are smaller /
cheaper to hash and compare.
* **One to Many Asset Processing**: An asset source file that produces
many assets currently must be processed into a single "processed" asset
source. If labeled assets can be written separately they can each have
their own configured savers _and_ they could be loaded more granularly.
Definitely worth exploring!
* **Automatically Track "Runtime-only" Asset Dependencies**: Right now,
tracking "created at runtime" asset dependencies requires adding them
via `asset_server.load_asset(StandardMaterial::default())`. I think with
some cleverness we could also do this for
`materials.add(StandardMaterial::default())`, making tracking work
"everywhere". There are challenges here relating to change detection /
ensuring the server is made aware of dependency changes. This could be
expensive in some cases.
* **"Dependency Changed" events**: Some assets have runtime artifacts
that need to be re-generated when one of their dependencies change (ex:
regenerate a material's bind group when a Texture needs to change). We
are generating the dependency graph so we can definitely produce these
events. Buuuuut generating these events will have a cost / they could be
high frequency for some assets, so we might want this to be opt-in for
specific cases.
* **Investigate Storing More Information In Handles**: Handles can now
store arbitrary information, which makes that information cheaper and
easier to access. How much should we move into them? Canonical asset load states
(via atomics)? (`handle.is_loaded()` would be very cool). Should we
store the entire asset and remove the `Assets<T>` collection?
(`Arc<RwLock<Option<Image>>>`?)
* **Support processing and loading files without extensions**: This is a
pretty arbitrary restriction and could be supported with very minimal
changes.
* **Folder Meta**: It would be nice if we could define per folder
processor configuration defaults (likely in a `.meta` or `.folder_meta`
file). Things like "default to linear filtering for all Images in this
folder".
* **Replace async_broadcast with event-listener?** This might be
approximately drop-in for some uses and it feels more lightweight.
* **Support Running the AssetProcessor on the Web**: Most of the hard
work is done here, but there are some easy straggling TODOs (make the
transaction log an interface instead of a direct file writer so we can
write a web storage backend, implement an AssetReader/AssetWriter that
reads/writes to something like LocalStorage).
* **Consider identifying and preventing circular dependencies**: This is
especially important for "processor dependencies", as processing will
silently never finish in these cases.
* **Built-in/Inlined Asset Hot Reloading**: This PR regresses
"built-in/inlined" asset hot reloading (previously provided by the
DebugAssetServer). I'm intentionally punting this because I think it can
be cleanly implemented with "multiple asset sources" by registering a
"debug asset source" (ex: `debug://bevy_pbr/src/render/pbr.wgsl` asset
paths) in combination with an AssetWatcher for that asset source and
support for "manually loading paths with asset bytes instead of
AssetReaders". The old DebugAssetServer was quite nasty and I'd love to
avoid that hackery going forward.
* **Investigate ways to remove double-parsing meta files**: Parsing meta
files currently involves parsing once with "minimal" versions of the
meta file to extract the type name of the loader/processor config, then
parsing again to parse the "full" meta. This is suboptimal. We should be
able to define custom deserializers that (1) assume the loader/processor
type name comes first and (2) dynamically look up the loader/processor
registrations to deserialize settings in-line (similar to components in
the bevy scene format). Another alternative: deserialize as dynamic
Reflect objects and then convert.
* **More runtime loading configuration**: Support using the Handle type
as a hint to select an asset loader (instead of relying on AssetPath
extensions)
* **More high level Processor trait implementations**: For example, it
might be worth adding support for arbitrary chains of "asset transforms"
that modify an in-memory asset representation between loading and
saving. (ex: load a Mesh, run a `subdivide_mesh` transform, followed by
a `flip_normals` transform, then save the mesh to an efficient
compressed format).
* **Bevy Scene Handle Deserialization**: (see the relevant [Draft TODO
item](#draft-todo) for context)
* **Explore High Level Load Interfaces**: See [this
discussion](#discuss-on_loaded-high-level-interface) for one prototype.
* **Asset Streaming**: It would be great if we could stream Assets (ex:
stream a long video file piece by piece)
* **ID Exchanging**: In this PR Asset Handles/AssetIds are bigger than
they need to be because they have a Uuid enum variant. If we implement
an "id exchanging" system that trades Uuids for "efficient runtime ids",
we can cut down on the size of AssetIds, making them more efficient.
This has some open design questions, such as how to spawn entities with
"default" handle values (as these wouldn't have access to the exchange
api in the current system).
* **Asset Path Fixup Tooling**: Assets that inline asset paths inside
them will break when an asset moves. The asset system provides the
functionality to detect when paths break. We should build a framework
that enables formats to define "path migrations". This is especially
important for scene files. For editor-generated files, we should also
consider using UUIDs (see other bullet point) to avoid the need to
migrate in these cases.
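
To illustrate the "Real Copy-on-write AssetPaths" bullet with plain std types (this is a demonstration of the underlying issue, not Bevy's actual `AssetPath` internals): cloning an `Owned` `Cow<str>` clones the backing `String`, while an `Arc<str>` shares a single allocation across clones.

```rust
use std::borrow::Cow;
use std::sync::Arc;

fn main() {
    // Cloning an Owned Cow clones the underlying String: the two copies
    // point at different heap allocations.
    let owned: Cow<'static, str> = Cow::Owned(String::from("textures/player.png"));
    let cloned = owned.clone();
    assert_ne!(owned.as_ptr(), cloned.as_ptr());

    // An Arc<str>, by contrast, shares one allocation across clones --
    // the kind of cheap, truly shared path representation the bullet
    // point is asking for.
    let shared: Arc<str> = Arc::from("textures/player.png");
    let shared2 = shared.clone();
    assert!(std::ptr::eq(shared.as_ref(), shared2.as_ref()));
}
```

The trade-off is that `Arc`-style sharing makes in-place mutation require an explicit copy (`Arc::make_mut`-style), which is exactly the "copy on write" behavior desired here.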

---------

Co-authored-by: BeastLe9enD <beastle9end@outlook.de>
Co-authored-by: Mike <mike.hsu@gmail.com>
Co-authored-by: Nicola Papale <nicopap@users.noreply.github.com>
rdrpenguin04 pushed a commit to rdrpenguin04/bevy that referenced this pull request Jan 9, 2024
# Objective

- Replace md5 by another hasher, as suggested in
bevyengine#8624 (comment)
- md5 is not secure and is slow; use something more secure and faster

## Solution

- Replace md5 by blake3


Putting this PR in the 0.12 milestone, as once it's released, changing
the hash algorithm will be a painful breaking change
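
For context on what that hash is for: the processor hashes asset bytes (together with loader settings) to decide whether reprocessing is needed. Bevy switched this to blake3 per the commit above; the sketch below uses std's non-cryptographic `DefaultHasher` only so it runs without external crates — the structure, not the algorithm, is the point, and the function name `asset_hash` is made up for illustration.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash an asset's bytes together with its (serialized) loader settings so
// that a change to either triggers reprocessing. DefaultHasher is a
// stand-in here: it is not cryptographic and not stable across Rust
// versions, so don't use it for real integrity checks.
fn asset_hash(asset_bytes: &[u8], meta_bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    asset_bytes.hash(&mut hasher);
    meta_bytes.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let v1 = asset_hash(b"image bytes", b"settings v1");
    let unchanged = asset_hash(b"image bytes", b"settings v1");
    let changed = asset_hash(b"image bytes", b"settings v2");
    // Identical inputs hash identically; changed settings force reprocessing.
    assert_eq!(v1, unchanged);
    assert_ne!(v1, changed);
}
```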
Maximetinu pushed a commit to Sophya/bevy that referenced this pull request Feb 12, 2024
# Objective
- No point in keeping Meshes/Images in RAM once they're going to be sent
to the GPU, and kept in VRAM. This saves a _significant_ amount of
memory (several GBs) on scenes like bistro.
- References
  - bevyengine#1782
  - bevyengine#8624

## Solution
- Augment RenderAsset with the capability to unload the underlying asset
after extracting to the render world.
- Mesh/Image now have a cpu_persistent_access field. If this field is
RenderAssetPersistencePolicy::Unload, the asset will be unloaded from
Assets<T>.
- A new AssetEvent is sent upon dropping the last strong handle for the
asset, which signals to the RenderAsset to remove the GPU version of the
asset.

---

## Changelog
- Added `AssetEvent::NoLongerUsed` and
`AssetEvent::is_no_longer_used()`. This event is sent when the last
strong handle of an asset is dropped.
- Rewrote the API for `RenderAsset` to allow for unloading the asset
data from the CPU.
- Added `RenderAssetPersistencePolicy`.
- Added `Mesh::cpu_persistent_access` for memory savings when the asset
is not needed except for on the GPU.
- Added `Image::cpu_persistent_access` for memory savings when the asset
is not needed except for on the GPU.
- Added `ImageLoaderSettings::cpu_persistent_access`.
- Added `ExrTextureLoaderSettings`.
- Added `HdrTextureLoaderSettings`.

## Migration Guide
- Asset loaders (GLTF, etc) now load meshes and textures without
`cpu_persistent_access`. These assets will be removed from
`Assets<Mesh>` and `Assets<Image>` once `RenderAssets<Mesh>` and
`RenderAssets<Image>` contain the GPU versions of these assets, in order
to reduce memory usage. If you require access to the asset data from the
CPU in future frames after the GLTF asset has been loaded, modify all
dependent `Mesh` and `Image` assets and set `cpu_persistent_access` to
`RenderAssetPersistencePolicy::Keep`.
- `Mesh` now requires a new `cpu_persistent_access` field. Set it to
`RenderAssetPersistencePolicy::Keep` to mimic the previous behavior.
- `Image` now requires a new `cpu_persistent_access` field. Set it to
`RenderAssetPersistencePolicy::Keep` to mimic the previous behavior.
- `MorphTargetImage::new()` now requires a new `cpu_persistent_access`
parameter. Set it to `RenderAssetPersistencePolicy::Keep` to mimic the
previous behavior.
- `DynamicTextureAtlasBuilder::add_texture()` now requires that the
`TextureAtlas` you pass has an `Image` with `cpu_persistent_access:
RenderAssetPersistencePolicy::Keep`. Ensure you construct the image
properly for the texture atlas.
- The `RenderAsset` trait has significantly changed, and requires
adapting your existing implementations.
  - The trait now requires `Clone`.
- The `ExtractedAsset` associated type has been removed (the type itself
is now extracted).
  - The signature of `prepare_asset()` is slightly different
- A new `persistence_policy()` method is now required (return
RenderAssetPersistencePolicy::Unload to match the previous behavior).
- Match on the new `NoLongerUsed` variant for exhaustive matches of
`AssetEvent`.
dekirisu pushed a commit to dekirisu/bevy_gltf_trait that referenced this pull request Jul 7, 2024
github-merge-queue bot pushed a commit that referenced this pull request Sep 17, 2024
…15058)

# Objective

Asset processing (added as part of #8624) is a powerful, high-impact
feature, but has been widely underused (and underdeveloped) due to poor
developer understanding.

## Solution

In this PR, I've documented what asset processing is, why it's useful,
and pointed users to the two primary entry points.

While I would like substantially more involved practical examples for
how to perform common asset-processing tasks, I've split them out from
this PR for ease of review (and actually submitting this for review
before the weekend).

We should add bread crumbs from the module docs to these docs, but
whether we add that here or in #15056 depends on which gets merged
first.

---------

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
Labels
- A-Assets: Load files from disk to use for things like images, models, and sounds
- C-Feature: A new feature, making something new possible
- C-Usability: A targeted quality-of-life change that makes Bevy easier to use
- X-Controversial: There is active debate or serious implications around merging this PR