wasm: expose methods to dispose or recreate wasm instances #522
Comments
One other alternative that should be available right now (similar to 3) is to use dynamic imports, e.g. … after which presumably you can just make sure MeshoptDecoder gets GC'ed entirely. For 1, I think it would be better if …
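The example snippet from that comment did not survive the copy here; a minimal sketch of the dynamic-import idea, assuming the npm `meshoptimizer` package and its `decodeGltfBuffer` entry point, might look like this:

```js
// Sketch: load the decoder on demand via a dynamic import so nothing keeps a
// module-level reference once decoding is done. The hope is that dropping all
// references lets MeshoptDecoder (and its wasm memory) be GC'ed; as the
// follow-up comments note, module caching in the host/bundler can defeat this.
async function decodeOnce(count, stride, source, mode, filter) {
  const { MeshoptDecoder } = await import('meshoptimizer');
  await MeshoptDecoder.ready;

  const target = new Uint8Array(count * stride);
  MeshoptDecoder.decodeGltfBuffer(target, count, stride, source, mode, filter);
  return target; // no reference to MeshoptDecoder escapes this function
}
```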
For webpack, dynamic imports did not seem to dispose of the loaded module when called again.
Maybe a secondary parameter for …
Looks like in Chrome at least, you must call …
Or would it be possible to create a chunk-based API, decoding buffers chunk by chunk, making it possible to use a fixed-size wasm memory?
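Nothing like this exists in meshoptimizer today; purely as an illustration of the idea, a chunked API could be shaped roughly like the following, where `createChunkDecoder`, `push`, and `finish` are invented names:

```js
// Hypothetical chunked decoding loop: feed the compressed stream to the
// decoder in fixed-size pieces so the wasm memory only ever needs a small,
// constant scratch buffer instead of growing with the mesh size.
const CHUNK = 256 * 1024; // arbitrary fixed scratch size

function decodeInChunks(compressed, output) {
  const decoder = createChunkDecoder({ scratchSize: CHUNK }); // invented API
  for (let offset = 0; offset < compressed.length; offset += CHUNK) {
    // Decode one slice at a time; decoded data lands directly in `output`.
    decoder.push(compressed.subarray(offset, offset + CHUNK), output);
  }
  decoder.finish();
}
```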
I'm also interested in this. Working on a project that has strict memory requirements, and it would be great to be able to release the memory being held by the MeshoptDecoder.
Looks like there's some advances on … @OrigamiDev-Pete, in your application, are you using web workers (…)?
At this stage we're decoding on the main thread.
I think for now it would make sense to focus on the WebWorkers usage here. When a lot of geometry is processed, WebWorkers are probably beneficial either way, if only to avoid main thread stalls. On the opposite side, outside of WebWorkers the solutions here are... not clean.

There's issues with recreating instances, both inside WebWorkers and outside of them, due to promise handling and what not, and while the future solution here is pretty clearly …

Streaming/chunking solutions can be explored in the future, but they also have issues due to the more complicated state handoff that is necessary. This almost necessitates a second version of the Wasm/JS library if we want to keep the code size small... and there's a lot of subtleties here.

On the flip side, we already should support a better way to handle worker management (calling …). To that end, I'll open a PR that supports …
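For context, a minimal sketch of the existing worker path this comment refers to, using the `useWorkers` and `decodeGltfBufferAsync` entry points from the `meshoptimizer` package (the pool size is arbitrary, and whether a later `useWorkers(0)`-style call can release the pool is exactly what is being discussed, so that part is only hinted at in a comment):

```js
import { MeshoptDecoder } from 'meshoptimizer';

async function decodeAttributeOffThread(source, count, stride) {
  await MeshoptDecoder.ready;

  // Route decode requests to a small worker pool instead of the main thread.
  MeshoptDecoder.useWorkers(4);

  // mode/filter strings mirror the EXT_meshopt_compression extension fields.
  const decoded = await MeshoptDecoder.decodeGltfBufferAsync(
    count, stride, source, 'ATTRIBUTES', 'NONE');

  // Hypothetically, something like MeshoptDecoder.useWorkers(0) could release
  // the pool (and its wasm memory) here; that capability is the subject of
  // this issue rather than settled API.
  return decoded;
}
```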
Oh I should say that it's simple to add something like …

For anyone running into this issue in a scenario where WebWorkers aren't a reasonable solution this may be fine, and could be applied as a workaround by patching the module source. But it leads to issues if decoding is requested again after this call, and recreating and propagating the modules throughout existing workers is enough trouble that it might be better to wait for Wasm memory enhancements.
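The snippet that comment refers to is missing above; a patch of that kind might look roughly like this, where `instance` and `heap` are assumed stand-ins for the decoder's internal closure variables (names are guesses, not the actual source):

```js
// Hypothetical standalone illustration of the workaround: keep the wasm state
// in mutable bindings and clear them on dispose() so the WebAssembly.Memory
// can be garbage collected. In the real module these would be internal
// closure variables rather than exported state.
let instance; // set by the module's init path after WebAssembly.instantiate(...)
let heap;     // Uint8Array view over instance.exports.memory

function dispose() {
  instance = undefined;
  heap = undefined;
  // After this, any decode call would need to re-instantiate the wasm module,
  // which is the caveat noted in the comment above.
}
```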
Motivation:
The `WebAssembly.Memory` can only grow, not shrink; the only way to reclaim memory is to let the wasm instance be garbage collected. After decompressing a large mesh using meshoptimizer, the wasm memory is never released since the instance is defined in the module closure. (Screenshot of the memory view in Chrome devtools omitted.)
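A small sketch illustrating the grow-only behavior described above, using only the standard WebAssembly JS API (no meshoptimizer specifics):

```js
// WebAssembly.Memory can grow but exposes no way to shrink; the backing
// buffer is only reclaimed once the Memory (and the instance holding it)
// becomes unreachable and is garbage collected.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB
console.log(memory.buffer.byteLength); // 65536

memory.grow(256); // request 256 more pages (~16 MiB)
console.log(memory.buffer.byteLength); // 16842752

// There is no memory.shrink(); dropping every reference to `memory`
// is the only way to let the engine reclaim the pages.
```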
Risks:
Third-party libraries using meshoptimizer may have assumptions on `MeshoptDecoder.ready`. Recreation of the wasm instance might need to be async, so a mechanism like double-buffering might be needed to keep backwards compatibility, or maybe a separate js module with manual instance management could be created in addition to the existing ones.
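To make the double-buffering idea concrete, here is a rough sketch under assumed names; `createDecoderInstance` and `recreate` are not part of the meshoptimizer API, they only illustrate how a swap could keep a `ready`-style promise valid for existing callers:

```js
// Hypothetical: keep serving decode calls from the current wasm instance while
// a replacement is instantiated, then swap, so existing callers never see a
// broken state and the old instance (with its grown memory) becomes collectable.
let instancePromise = createDecoderInstance(); // stands in for internal wasm instantiation

async function recreate() {
  const next = createDecoderInstance(); // build the replacement in the background
  await next;                           // old instance keeps working meanwhile
  instancePromise = next;               // swap; old memory can now be GC'ed
}

async function decode(...args) {
  const instance = await instancePromise; // always resolves to the latest instance
  return instance.decode(...args);        // decode() here is also illustrative
}
```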
Alternatives:
1. `decodeGltfBufferAsync` is not terminated and can not be manually terminated on idle.
2. Use `meshopt_decoder_reference.js`, which would be much slower than wasm.
3. Use `meshopt_decoder.js` instead of module ones, making it possible to detach the `MeshoptDecoder` from the global context.
4. A `streaming` API for compressing or decompressing in small fixed-size chunks.