How to handle backwards compatibility in future glTF versions #806
I'm pretty sure we would follow Khronos conventions, which are basically semver. glTF 1.1 is a unique case; it is a "bug fix" release that required some breaking changes. I wouldn't expect a release like this again; only a new x.0 release would include breaking changes.
This is not so much about the version number itself. The question is rather related to the fact that the file may have to be read entirely before its version number can be figured out. So even if it was called glTF 2.0, one would still have to (at least implicitly) support all properties from 1.0, regardless of the declared version.

For 1.1, I think that the number of actual breaking changes is small enough that everybody can cope with them (even if they may require one or another quirky workaround). And in general, these are things that should not govern decisions that have to be made to keep the format healthy.

(Maybe I'm just overestimating the difficulties here. All this mainly applies to loaders that do not read the JSON into an anonymous, untyped DOM first: for these loaders, any removed property would cause a breaking incompatibility. But it's not unlikely that using an untyped DOM would in fact be the more sustainable strategy. Even if the question about the actual data types for different major versions still has to be answered, it could at least make the loader itself a bit more resilient.)
I don't think you're overestimating. I remember firsthand maintaining some code that could ingest multiple past versions of Lightwave objects. I think Lightwave did a great job with both forwards and backwards compatibility, but it didn't happen automatically; they designed for extensibility up front. The maintainers of glTF should not underestimate how quickly the Internet will become overrun by obsolete glTF files that users still care about. The user-perceived stability of the standard will rest on how well it embraces its own past and future.
Ah yes, I was only considering JavaScript.
Could be viable. We need to look at the tradeoff between keeping the spec clean and how non-JavaScript loaders can quickly/easily parse out just the version. @mre4ce do you have any experience here?
The fact that I generated the Java classes based on the schema was basically a mix of an "experiment", "laziness" and the attempt to strictly obey the specs. This may not be the most sustainable solution in general, and again, this should not govern any decisions here. A (possibly far-fetched) analogy: as another experiment, I recently created classes for COLLADA from the schema ( https://github.com/javagl/JCollada , do not take this too seriously!). Of course, this causes entirely different classes for the different versions. I'm also curious about ideas from people who have more experience with file format versioning here...
I am the developer of yocto_gltf [https://github.com/xelatihy/yocto-gl], which is a loader that loads into a typed object hierarchy for C++. The loading code and C++ hierarchy are automatically generated from the JSON schema. I started working on glTF 1.1 and the incompatibility causes two issues. First, the object hierarchy and parsing code are no longer compatible. This means that an entirely new data structure is required. One could manually patch things up, but one thing I like about yocto_gltf is that the code is automatically generated from the spec, so I am sure that all error checking and conversion code is correct, and not manually tuned. Unfortunately this also means that backward incompatibility (and breaking changes to the JSON schema) disallows any manual fixing of the backward-incompatible changes. Obviously I could switch to manually writing a parser, just like many other code bases. But then I lose the correct-by-default implementation. This is not to say that changes are bad. In fact, I did suggest some myself in other issues. But I just want to share the experience of a library for a strongly typed language.
BTW, for strongly typed codebases the only option is to have multiple implementations and switch the loader by checking the glTF version at load time. For yocto_gltf this is trivial since the code is automatically generated, so supporting multiple versions is very easy. Note that you can have codebases that are not strongly typed even in strongly typed languages by manually handling incompatible changes.
It seems like the idea of auto-generating code from the schema lends itself to strongly typed languages. I started looking at the glTF part of yocto-gl, and still have to take a closer look at the actual "parsing" code, but if I understood this correctly, then it still follows the two-step process of first reading the JSON into an untyped DOM, and then converting that DOM into the typed glTF object hierarchy.
This still allows you to follow the proposed approach:
You could "peek" into the anonymous JSON DOM to figure out the version, and create an appropriate instance of the (auto-generated) glTF root object for the respective version.
This still brings the necessity of having the whole set of glTF types for each version, even though most of them will be equal, and the few differences may be minor, and might even just refer to whether a property is required or not.
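A minimal sketch of that version-peek approach, in Python for brevity; the `Gltf10`/`Gltf11` classes are hypothetical stand-ins for the auto-generated model classes of each version:

```python
import json

# Hypothetical typed root classes, one per supported glTF version
# (stand-ins for classes auto-generated from each version's schema).
class Gltf10:
    def __init__(self, dom):
        self.dom = dom

class Gltf11:
    def __init__(self, dom):
        self.dom = dom

def load_gltf(text):
    # Step 1: read the JSON into an anonymous, untyped DOM.
    dom = json.loads(text)
    # Step 2: peek at the declared version, then pick the typed model.
    # glTF 1.0 assets may omit asset.version, so fall back to "1.0".
    version = dom.get("asset", {}).get("version", "1.0")
    if version.startswith("1.1"):
        return Gltf11(dom)
    return Gltf10(dom)
```

The cost, as noted above, is one full set of typed classes per version; only the dispatch itself is cheap.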
That's exactly the same issue that I ran into for https://github.com/javagl/JglTF/tree/master/jgltf-impl. An additional point here: when I'm using these classes, I'm not creating a DOM. Instead, I'm using the Jackson library, which magically reads the JSON input directly into the model classes (using data binding - under the hood, this uses loads of reflection magic, but works remarkably well). From quickly skimming over the README and the code, I did not see how you are generating the implementation classes in yocto-gl. Do you have options to "tweak" or "configure" the generator so that it may be able to generate classes that are capable of representing both 1.0 and 1.1 glTF structures? (For JglTF, I considered a workaround here: creating a fork of the schema spec, and tweaking the schema so that the changes between 1.0 and 1.1 are written in a form that causes backward-compatible code to be generated - but I did not yet analyze whether this is a viable approach...)
To answer a few of the questions above regarding yocto_gltf:
One last thing: the idea of code generation from the spec is not new in OpenGL. Many extension loading libraries are automatically created from the Khronos OpenGL specs, for example https://github.com/Dav1dde/glad. yocto_gltf tries to do the same for glTF. The problem is that the specs (i.e. the schema) have not been developed for strongly typed languages, while the OpenGL specs are developed as strongly typed. Again, with a few small modifications, we can handle glTF just like GL and leave the spec the same. Just a few changes to the way the schemas are written.
I'm not entirely sure what possible schema changes you refer to. Nevertheless, I'd at least be curious what changes you would suggest (regardless of whether they can or will be integrated).
Probably a certain amount of "untypedness" may be required if material values can be bound to anything – sadly this shifts the burden onto the implementation to perform error checking and proper API calls. There are other ways to specify this, though, that would allow for an easier strongly typed feel, for example using type-safe unions (std::variant in C++, standard in all FP languages). As for the schema changes, the main issues are specifying what to do with parameter/type inheritance and being consistent throughout the schema. Also important would be to provide specific type names for the objects. For JS neither of these is a concern since JS is a prototype language. But it may come back to bite if one wants to switch to TypeScript or the newer JS variants.
@xelatihy I know, the question is a bit vague, but: do you already have plans about how to tackle glTF 2.0? The structures will be "incompatible", and I wonder how the structures (which will have to be entirely different classes in a typed world) may be passed to the consumer. My gut feeling right now is that, in a typed language, there would have to be something like a
Well, the question is not vague. I have been thinking about it too. Here are the options I considered.
For yocto_gltf, I plan to support only glTF 2, since I do not think there is enough of a user base.
BTW, supporting maps-or-arrays in a typed language will be "fun" :-(
Yes, the change from maps/objects to arrays sounds like a small thing (as in JSON, it roughly boils down to changing `{}` to `[]`). Regarding the options:
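To make the maps-vs-arrays change concrete, here is a small sketch (in Python; the function and parameter names are made up for illustration) of turning a 1.0-style id-keyed dictionary into a 2.0-style array plus an id-to-index map, so that string references can be rewritten as integer indices:

```python
def dicts_to_arrays(objects_by_id):
    """Convert a 1.0-style {id: object} dictionary into a 2.0-style
    (object_list, id_to_index) pair. The id_to_index map can then be
    used to rewrite string references as integer indices."""
    # Sort the ids so the resulting array order is deterministic.
    ids = sorted(objects_by_id)
    object_list = [objects_by_id[i] for i in ids]
    id_to_index = {obj_id: index for index, obj_id in enumerate(ids)}
    return object_list, id_to_index
```

The reverse direction (arrays back to id-keyed maps) is just as mechanical, which is why the change is conceptually small even though it breaks every generated typed class.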
One thing that seems interesting is that glTF might run at best performance only on pure GL renderers (not sure how many of those there are). For the others, the GL-only indirections are a slowdown anyway. So, you then convert to your own internal format. Arrays-vs-maps is only a concern if you want to consume it as is, but then any API change in GL/Vulkan might require a drastic glTF change. So the portability of the assets and their longevity might not be great in the end.
I think that some of the basic concepts that make glTF compact and fast are shared by the graphics APIs - e.g. raw chunks of binary data. And when seeing it like that - glTF being one of the most basic possible descriptions of what has to be rendered - then it might not matter so much whether a particular JSON structure "nicely fits" a certain renderer implementation, but rather that it does not have a large, complicated overhead. And in this regard, I think that e.g. measuring the performance difference between a map-based and an array-based JSON asset description would eventually boil down to measuring the performance of the JSON parser that is actually used there... However, I hope that the upcoming changes are carefully planned in view of the requirements for Vulkan and DirectX - and in order to keep the momentum of glTF, it's certainly better to make breaking changes early and in one run than later in many small iterations. Nevertheless, supporting both versions at the same time will be challenging on every level - for the loader, the consumers/pipelines, and the renderers. I'm curious to see how e.g. the validator will cope with this; maybe this can be a source of inspiration.
#819 could be the next step to align glTF data structures with modern APIs even more, but the same could also be achieved by adding additional requirements for existing glTF objects without major changes (like requiring all interleaved accessors of the same mesh to use the same bufferView).
That's correct. Using string-based IDs as "properties" could lead to noticeable processing overhead. Main culprits are usually
Agreed on all counts.
@xelatihy There's a first 2.0 version by @lexaknyazev at https://github.com/KhronosGroup/glTF/tree/2.0 with the updated JSON schema (arrays+indices instead of dictionaries+ids). Although there may still be changes for finalizing 2.0, this can serve as a basis for tests of how to cope with the maps-vs-arrays change.
This is a great discussion and I am also hitting similar questions when fleshing out some of the changes in #830. Following semver versioning means that breaking changes bump the major version, while backwards-compatible additions bump the minor version.
So say we start off with 2.0 as the Major version that was incompatible with previous 1.0 version. Then we want to have 2.x which can add more functionality but is backwards-compatible with 2.0. But what does that mean for parsers/engines?
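Under that semver reading, the loader-side check is almost trivial; the open question is only what "backwards-compatible" obliges engines to do. A sketch (in Python; the function name and defaults are made up for illustration):

```python
def can_load(asset_version, supported_major=2):
    """Decide whether an engine built for `supported_major`.x can open
    an asset, following a semver reading of glTF version strings."""
    major, minor = (int(part) for part in asset_version.split(".")[:2])
    # A different major version implies breaking changes: reject.
    # A higher minor version is additive by the semver contract, so it
    # is accepted here - but that is only safe if engines are required
    # to ignore additions they do not understand, which is exactly the
    # forward-compatibility question raised below.
    return major == supported_major
```
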
Here is a practical example with #830. If we went with what is currently suggested, then the material model will be an enum.
But say in 2.x we add another possible model. Since this would be an additional enum a 2.0 engine will have no idea what to do with it and can no longer read the file.
In 2.x an additional material model could be added while still requiring that the older model be present for backwards compat. So it could be something like this-
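One possible shape for such a dual-model material (all property names here are hypothetical illustrations, not taken from the actual #830 proposal):

```json
{
  "name": "exampleMaterial",
  "metalRoughness": {
    "baseColorFactor": [1.0, 0.8, 0.6, 1.0],
    "metallicFactor": 0.2,
    "roughnessFactor": 0.6
  },
  "specGloss": {
    "diffuseFactor": [1.0, 0.8, 0.6, 1.0],
    "glossinessFactor": 0.4
  }
}
```

The 2.0-era block stays mandatory; the 2.x addition rides alongside it.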
An older 2.0 engine could still continue to read the METAL_ROUGHNESS properties while ignoring the SPEC_GLOSS properties. So it's possible to have some type of forwards compat too, but we need to decide whether that's an explicit requirement to support. What are everyone's expectations around this? @pjcozzi @lexaknyazev
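The "read what you know, ignore the rest" behavior could be sketched like this (Python for brevity; the function and the material layout are hypothetical illustrations of the idea, not spec'd behavior):

```python
# Material models a hypothetical 2.0 engine understands, in order
# of preference. A later 2.x engine would list SPEC_GLOSS here too.
KNOWN_MODELS = ("METAL_ROUGHNESS",)

def pick_material_model(material):
    """Return the first material model present in the material that
    this engine knows how to render, silently ignoring any newer
    additions (e.g. a SPEC_GLOSS block added in a later 2.x)."""
    for model in KNOWN_MODELS:
        if model in material:
            return model, material[model]
    raise ValueError("no supported material model present")
```

This only works if the spec requires newer minor versions to keep the older model present, which is exactly the decision being asked about.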
New ( Comparable example: |
On the other hand, if you need to serve clients with different decoding capabilities (think of any major video streaming service), you would provide them different versions of the same video via something like MPEG-DASH. With that in mind, we could think about the desired "atomicity" level of a glTF asset.
I was only anticipating backwards compatibility here. So far, the glTF community has shown an impressive willingness to upgrade quickly (2.0 will be the ultimate test!). I understand this isn't as easy for desktop apps. For simplicity, which is key to supporting glTF adoption, I am hesitant to suggest forward compatibility, as I expect it would be hard to ensure in practice and we might end up bumping to 3.x sooner than expected when we find out that we can't support it. @sbtron I am open to ideas if this is an important use case for you and you think this is a reasonable path that generalizes well.
@lexaknyazev @sbtron did we spell out the forward compatibility in the spec or some best practices guide? OK to close this?
@pjcozzi Versioning is spelled out in the spec here: https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#versioning There are some discussions in the PR for background: #899 (comment) |
Perfect, this can be closed. |