How to stream a glTF 2.0 file from a server? #1238
Comments
@ProFive for an HLOD approach to streaming, check out 3D Tiles, a spatial data structure containing nodes with glTF content. For progressive streaming, there is interest (but no extension, AFAIK) in using POP buffers: https://x3dom.org/pop/ Note that 3D Tiles and POP buffers could even be used together. Definitely let me know if you do that! 😄 CC @mlimper
I have a hunch that progressive streaming could be done without an extension... For example, you might write a serverside layer to stream `bufferView`s gradually over a WebSocket, and a web client could update gradually... To me this is an area where I'd like to see it implemented a few times, and then, if there's clear benefit to turning that into a standard, we can figure out how to do so. Also worth considering: #1045
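A minimal client-side sketch of that idea, assuming a made-up wire format (a 12-byte header naming the target `bufferView`, byte offset, and byte length, followed by the payload); the endpoint URL and framing are illustrative, not an existing protocol:

```ts
// Hypothetical wire format: [bufferView: u32][byteOffset: u32][byteLength: u32][payload...]
const HEADER_BYTES = 12;

// One preallocated byte array per glTF bufferView, sized from the glTF JSON,
// which is assumed to have been fetched and parsed first.
const bufferViews = new Map<number, Uint8Array>();

const socket = new WebSocket("wss://example.com/gltf-stream"); // illustrative endpoint
socket.binaryType = "arraybuffer";

socket.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  const header = new DataView(event.data, 0, HEADER_BYTES);
  const index = header.getUint32(0, true);
  const byteOffset = header.getUint32(4, true);
  const byteLength = header.getUint32(8, true);

  const target = bufferViews.get(index);
  if (!target) return; // chunk refers to a bufferView we don't know about

  // Patch the received bytes into place within the preallocated view.
  target.set(new Uint8Array(event.data, HEADER_BYTES, byteLength), byteOffset);

  // At this point the renderer would re-upload the affected range to the
  // GPU and redraw; that part is engine-specific and omitted here.
};
```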
I think it makes sense to differentiate between two kinds of streaming here:
1. Progressively streaming scene content, i.e., many individual meshes.
2. Progressively streaming the geometry of a single mesh.
This is case 2, and we made a kind of extension draft quite a while ago (for glTF 1.0), checking what would be necessary to support POP buffers or similar progressive LOD methods. The idea was to provide an additional layer of information describing how the binary mesh data is organized into progressive refinement levels.
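To make that concrete, here is a rough WebGL2 sketch (in TypeScript) of the prefix-rendering trick that POP-style approaches rely on: the vertex data is reordered so that a prefix of the non-indexed buffer is a valid, coarser version of the mesh, and the drawn range simply grows as bytes arrive. The vertex layout and all names are assumptions for illustration; shader and attribute setup are omitted:

```ts
function makeProgressiveDrawer(gl: WebGL2RenderingContext, totalBytes: number) {
  // Preallocate GPU storage for the full mesh up front.
  const vbo = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  gl.bufferData(gl.ARRAY_BUFFER, totalBytes, gl.DYNAMIC_DRAW);

  const bytesPerVertex = 12; // tightly packed vec3 float positions (assumption)
  let bytesReceived = 0;

  return {
    // Append a newly downloaded chunk at the current end of the buffer.
    append(chunk: ArrayBufferView): void {
      gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
      gl.bufferSubData(gl.ARRAY_BUFFER, bytesReceived, chunk);
      bytesReceived += chunk.byteLength;
    },
    // Draw only the complete triangles that have arrived so far.
    draw(): void {
      gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
      const vertexCount = Math.floor(bytesReceived / bytesPerVertex);
      gl.drawArrays(gl.TRIANGLES, 0, vertexCount - (vertexCount % 3));
    },
  };
}
```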
If there is no such progressive LOD / reordering of the mesh data, and if indexed rendering is being used, getting arbitrary portions of a `bufferView` is of limited use, since the indices may reference vertices anywhere in the buffer.

Regarding case 1, the client may download the main glTF JSON file, quickly analyze it, and then prioritize and manage the downloads of all the individual buffers.
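As a sketch of that "case 1" approach, assuming the glTF JSON has already been fetched: a client could order the buffer downloads by a caller-supplied priority (e.g. estimated on-screen importance). The priority function and callback names here are hypothetical:

```ts
interface GltfBuffer { uri?: string; byteLength: number; }
interface GltfJson { buffers?: GltfBuffer[]; }

async function loadBuffersByPriority(
  gltf: GltfJson,
  baseUrl: string,
  priority: (bufferIndex: number) => number, // higher means download sooner
  onBufferReady: (bufferIndex: number, data: ArrayBuffer) => void,
): Promise<void> {
  // Sort buffer indices so the most important buffers are fetched first.
  const indices = (gltf.buffers ?? [])
    .map((_, i) => i)
    .sort((a, b) => priority(b) - priority(a));

  for (const i of indices) {
    const uri = gltf.buffers![i].uri;
    if (!uri) continue; // GLB-embedded buffer, already available
    const response = await fetch(new URL(uri, baseUrl).href);
    onBufferReady(i, await response.arrayBuffer());
  }
}
```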
I guess the experience we've had with large CAD models is very similar - using spatial data structures to make sure to always just "download the right things" is very important. @ProFive Given that your scene has almost 20K geometries, maybe you can already get pretty far with prioritized downloads? Or is this something you already do, and you really miss the streaming of data for individual meshes? Thanks, and sorry for the long post :-P
It probably makes sense to distinguish both cases and put them into different threads: Case 1 (progressively streaming scene content) does not require any modifications to glTF (as far as I can see), so I agree it should probably be discussed within #755. Case 2 (progressively streaming a single mesh geometry) could require modifications to the spec, or an extension. I guess it could be discussed within this thread, but please go ahead and do whatever you believe works best.
@mlimper would it make sense to also consider the use case where one wants to stream an animation in real time?
Hm... I'm not sure I get the idea correctly - you mean a case where the animation data is so big that it needs to be cut down into multiple streamable chunks? Like, having a lot of 3D position data for different morph targets, and loading them one by one?
That could be a use case. Another one could be that an application (simulation, game, etc.) can transmit content (buffers, meshes, poses, animation tracks) in real time for remote visualization. Is this last one beyond the scope of glTF?
One case that is already possible today is to put different animations into different `buffer` files, so that a client can defer downloading an animation's data until that animation is actually needed.
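For example (file names and byte counts are illustrative, not from this thread), the glTF JSON could be organized so that each animation's sampler data lives in its own buffer file:

```ts
// A sketch of the layout described above, written as a TypeScript literal.
// A client can load geometry.bin up front and fetch walk.bin / run.bin
// lazily, when the corresponding animation is first played.
const gltfExcerpt = {
  buffers: [
    { uri: "geometry.bin", byteLength: 1_000_000 }, // mesh data, loaded up front
    { uri: "walk.bin", byteLength: 200_000 },       // data for animation 0 only
    { uri: "run.bin", byteLength: 250_000 },        // data for animation 1 only
  ],
  bufferViews: [
    { buffer: 0, byteOffset: 0, byteLength: 1_000_000 },
    { buffer: 1, byteOffset: 0, byteLength: 200_000 },
    { buffer: 2, byteOffset: 0, byteLength: 250_000 },
  ],
  // animations[0].samplers reference accessors into bufferView 1;
  // animations[1].samplers reference accessors into bufferView 2.
};
```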
You'd need something considerably more complex than glTF to have one application reading from a file at the same time as another application is writing arbitrary data into it. That feels more like the role of an interchange format, perhaps. But I imagine someone could define an extension that allows open-ended buffers for animation samplers, allowing animation to stream in without fundamentally changing the structure of the scene.
@donmccurdy Is there any reference for how to do this? We're just getting started in this field (rendering glTF in the browser via A-Frame) and quickly hitting network limitations. Not sure if this is normal, but our .bin file for ~20 seconds of video exported to glTF is 4 GB (recorded from a LiDAR iPad).
@synth that's a lot of data for 20 seconds of video, yes. I'd expect that raw LiDAR data coming from the iPad will need a lot of processing to optimize: cleaning up and simplifying geometry. If it has textures, those would need to be optimized or baked to vertex data too. If each mesh is only going to be shown for a fraction of a second, you'll want each mesh to be pretty small. The animation styles that work best in glTF are keyframe T/R/S, skinning, and shape keys / morph targets. There are some formats like Alembic that focus specifically on streaming a geometry sequence, but it's pretty hard to support on the web, and A-Frame does not, as far as I know. If you've got the meshes down to a reasonable size, you can split each mesh into a separate file and load them one at a time. Depending on your use case, you might want to group several meshes into each file instead, to keep the number of requests manageable.
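As a rough sketch of playing such a mesh sequence in three.js (which A-Frame builds on), assuming each frame has been exported to its own .glb file; the file names, frame rate, and preload-everything strategy are simplifications for illustration:

```ts
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const scene = new THREE.Scene();
const loader = new GLTFLoader();
const frames: THREE.Object3D[] = [];

// Load frames/frame0.glb ... frames/frameN-1.glb, hidden until shown.
async function preloadFrames(count: number): Promise<void> {
  for (let i = 0; i < count; i++) {
    const gltf = await loader.loadAsync(`frames/frame${i}.glb`);
    gltf.scene.visible = false;
    frames.push(gltf.scene);
    scene.add(gltf.scene);
  }
}

let current = 0;
function showNextFrame(): void {
  if (frames.length === 0) return;
  frames[current].visible = false;
  current = (current + 1) % frames.length;
  frames[current].visible = true;
}

// Call showNextFrame() on a timer (e.g. every 1000 / 30 ms for 30 fps).
// In practice you would stream later frames in the background instead of
// preloading everything, to avoid the multi-gigabyte up-front download.
```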
Got it, thank you @donmccurdy!!
@donmccurdy, is there any way to split a single animation into multiple files? Our model has only one animation, which has ~500 frames, and we want to split it up and stream it to the client. Currently, we're streaming the individual geometries for every frame and updating the mesh. We think streaming animation data would be more efficient than that, but we're stuck on how to do it.
@CITIZENDOT strictly to answer the question, the keyframe data from a single channel in a single animation cannot be further subdivided. For example, the "translation" keyframes for joint "LeftFoot" must all be contained in a single accessor and buffer, and thus the same file. However, the content itself can be restructured, for example as a morph target animation, as discussed below.
Yes, we have an Alembic file which contains the animation data. Currently we're extracting OBJ files for every frame and compressing them using Draco. Each Draco frame/file is rendered as a frame.
That's what we're going for.
My line of thought is to split the animation data into chunks and stream them to the client, as described above. But if there is a better workflow than this to stream and render animation data, we'd be grateful to know.
I should perhaps rephrase this. When using a morph target animation, the keyframe data that defines when each "frame" is shown will be fairly small. The definition of what that frame looks like — the morph target — may or may not be any smaller than the sequence of meshes you are using now.
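To illustrate that size trade-off with a sketch (a glTF excerpt written as a TypeScript literal; the counts and layout are illustrative): 500 keyframes of times and weights stay small, while the morph targets themselves still carry full per-vertex data:

```ts
const morphAnimationExcerpt = {
  accessors: [
    // Keyframe times: 500 floats (componentType 5126 = FLOAT).
    { bufferView: 0, componentType: 5126, count: 500, type: "SCALAR" },
    // Weights output: 500 frames x 2 morph targets = 1000 floats.
    { bufferView: 1, componentType: 5126, count: 1000, type: "SCALAR" },
  ],
  animations: [{
    samplers: [{ input: 0, output: 1, interpolation: "LINEAR" }],
    channels: [{ sampler: 0, target: { node: 0, path: "weights" } }],
  }],
  // meshes[0].primitives[0].targets would hold the POSITION data for each
  // morph target; that is where most of the bytes live, and it may or may
  // not be smaller than the mesh sequence it replaces.
};
```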
The glTF specification has no opinion here, so a thread in https://discourse.threejs.org/ might be the best place to discuss it. A "workflow ... to stream and render animation data" can mean a lot of different things depending on the specific animation involved; I expect it will be hard to give useful advice without more technical details here. Your use case also sounds very much like what Vertex Animation Textures (VAT) are intended to solve in software like Houdini, Unity, and Unreal Engine. VAT is not a feature of glTF or three.js, but advanced users have occasionally implemented the technique in three.js (example).
Hi all,
I want to stream a glTF 2.0 file from a server because the model is very, very big (a huge model). I tried to use https://github.com/fl4re/rest3d-new, but it only supports streaming glTF 1.0 files.
Information about the glTF 2.0 file:
Thanks and best regards!