OBJLoader n-gon support #11871

Merged (1 commit) on Aug 9, 2017

Conversation

jbaicoianu
Contributor

Adds support for n-gons to OBJLoader, by automatically triangulating them while loading.

This means no more offline triangulation step needed for OBJ files from Blocks or other sources.

@jbaicoianu
Contributor Author

jbaicoianu commented Aug 1, 2017

Looks like I missed a whole lot of active conversation about this bug (#10438 and #10917). I'll address some of the concerns in the context of my own proposal:

"If a quad is not convex and planar, then the tessellation may be incorrect." (@WestLangley )
"As far as I can see this builds triangles by connecting each neighbouring pair of face vertices with the first vertex. So I believe this only works for convex faces ? And can potentially produce very high aspect ratio triangles ?" (@andreasplesch)

Both of these things are true, but I don't think this is the place to worry about them. If an editor is exporting non-planar n-gon faces, that's a bug in the editor, and you'll see undefined behavior wherever you try to use that model. This sort of thing should be fixed at the source (in the model file, by triangulating manually, or by fixing the editor that created the model so it can't produce such faces) - we don't have to account for every possible degenerate case.

"Have you considered the performance here?" (@jonnenauha)
This approach should run faster than the existing approach. All face-related regexes were removed, in favor of a split-and-trim approach. This also removes all special cases for the three "different" formats (vertex, vertex/uv, and vertex/uv/normal) and replaces them with one codepath that handles them all equally.
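
For anyone curious what "split-and-trim with one codepath" looks like in practice, here is a rough sketch (illustrative names only, not the exact code in this PR): the face line is split on whitespace, and each vertex spec is split on `/`, so `v`, `v/vt`, `v//vn` and `v/vt/vn` all flow through the same path.

```js
// Sketch only: parse one "f ..." line without face-specific regexes.
// Missing uv/normal indices simply come back as undefined.
function parseFaceLine( line ) {

	// "f 1/1/1 2/2/2 3/3/3 4/4/4" -> [ "1/1/1", "2/2/2", "3/3/3", "4/4/4" ]
	const specs = line.trim().split( ' ' ).filter( s => s !== '' ).slice( 1 );

	return specs.map( spec => {

		const [ v, vt, vn ] = spec.split( '/' );

		return {
			vertex: parseInt( v, 10 ),
			uv: vt ? parseInt( vt, 10 ) : undefined,
			normal: vn ? parseInt( vn, 10 ) : undefined
		};

	} );

}
```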

@mrdoob brought up the idea of using ShapeUtils' shape tessellation functions, but that's probably overkill for this. The simple draw-edges-connecting-all-vertices-to-the-first-vertex approach works fine in every case we've tried so far, and is computationally very lightweight. The ShapeUtils code, however, is quite complex, and would introduce a lot of memory allocations for each face, so I'm not sure it's the way forward even if it ends up catching the last 1% of edge cases.
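
The triangulation itself is just a fan; a minimal sketch (again illustrative, and assuming convex, roughly planar faces as discussed below):

```js
// Fan triangulation: connect each neighbouring pair of face vertices back to
// the first vertex. An n-gon yields n - 2 triangles. Concave faces can fan
// incorrectly, which is the caveat raised in the comments below.
function triangulateFan( faceVertices ) {

	const triangles = [];

	for ( let i = 1; i < faceVertices.length - 1; i ++ ) {

		triangles.push( [ faceVertices[ 0 ], faceVertices[ i ], faceVertices[ i + 1 ] ] );

	}

	return triangles;

}
```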

@WestLangley
Collaborator

The simple draw-edges-connecting-all-vertices-to-the-first-vertex approach works fine in every case we've tried so far.

If a quad is not convex, such a tessellation will be correct only if you are lucky.

It seems reasonable to say that only convex polygonal faces are supported.

@jbaicoianu
Contributor Author

@WestLangley yup - a modelling program might create a non-convex face and have some convention for how to triangulate that n-gon, but formats like OBJ don't capture this. There are `(n · (n+1) · … · (2n-4)) / (n-2)!` ways to triangulate an n-gon, and without knowing which one was intended, all we can do is make a best guess.
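
For reference, that count is the Catalan number C(n−2), which counts the triangulations of a convex polygon with n vertices:

```latex
C_{n-2} \;=\; \frac{(2n-4)!}{(n-1)!\,(n-2)!} \;=\; \frac{n \cdot (n+1) \cdots (2n-4)}{(n-2)!},
\qquad C_2 = 2,\; C_3 = 5,\; C_4 = 14 \quad \text{(quad, pentagon, hexagon)}.
```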

I'd take it further and say only planar convex n-gons are supported. If someone has examples of files which have this sort of geometry being created regularly by a major modelling app, it would be a great test case for further improvements.

@jonnenauha
Contributor

jonnenauha commented Aug 2, 2017

"should run faster" is not really what I want to see with these PRs :) You have to run simple, complex and huge models through your changes to get any idea how it behaves in V8 and other engines. Assuming better performance in even with trivial changes is up to the JS gods :) Especially in tight loops like what the line by line parser is doing thousands of iterations.

I do agree that regex execs "should be slower" than your split(). But you are not only replacing regexes with one split; you are doing twice as many for loops, and the first one does a split per iteration. At a quick look, this will generate a lot more garbage arrays than the regexes do. From my earlier (quite a long time ago) profiling, generating a lot of arrays was the main thing that slowed this code down on even moderately big OBJ files. I had to abandon a few "optimizations" myself because they just slowed down execution, even when it felt like they would run faster.

No one needs my "looks good to me" to go forward with these. I just hope people who make changes really do test with all the .obj models in the three.js repo and get some pre/post figures for execution times. At minimum, check that all models still render correctly. Obviously we can't put 100 MB test files in the repo to test perf on those, and it's not the usual use case for the loader anyway.

Simpler code is always better; this looks like a good change if it brings in new features as well.

Edit: I do want to add that any perf penalty we incur on all users of this loader needs to be worth it. Essentially everyone happy with the current loader won't need this feature. Adding it will make a few more users happy, but if it slows down all the current users' use cases (I would assume that is thousands of people, seeing how popular three.js is and how easy it is to download/export/generate OBJ models), I'm not sure it's a great trade-off.

As said in the earlier PRs about this, I would hope someone writes a solution that has as close to zero overhead as possible for current users, but still adds this useful feature for the people who are waiting for it. I'm not saying this solution is not that; I hope it is. But my skepticism about the perf stands: reading the "should run faster" comment, it really does seem like no perf comparison has been done.

I know perf profiling JS is not an exact science, and micro-optimizations can be quite hard to pick up. Just pick a (preferably the biggest) model and do a simple performance.now() before/after the load to get some figures. Run that in a loop, take the average, and you'll start to get some kind of understanding of what's going on. Do this with your branch/PR and with current HEAD. Chrome's performance timeline recording and custom labeling there can also be useful. I made a test page that does this type of measuring for all the models in the repo and reported the figures to the console; it was very useful when making changes to the loader.
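
Something along these lines is usually enough to get comparable numbers between HEAD and a branch (a sketch only; the model path and run count are placeholders, and it assumes OBJLoader is already included on the page):

```js
// Rough parse-time benchmark: fetch the OBJ text once, then time
// OBJLoader.parse() in a loop and report the average.
const loader = new THREE.OBJLoader();

fetch( 'models/obj/walt/WaltHead.obj' )     // placeholder: any large .obj from the repo
	.then( response => response.text() )
	.then( text => {

		const runs = 20;                    // placeholder run count
		let total = 0;

		for ( let i = 0; i < runs; i ++ ) {

			const t0 = performance.now();
			loader.parse( text );           // parse only, so network time is excluded
			total += performance.now() - t0;

		}

		console.log( 'average parse time:', ( total / runs ).toFixed( 2 ), 'ms' );

	} );
```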

@kaisalmen
Contributor

kaisalmen commented Aug 2, 2017

Two ideas:

  • What about keeping a markdown file with hyperlinks to large OBJ files freely accessible on the Internet (exported from various programs: Blender, Cinema 4D, 3ds Max, Maya, ZBrush, Blocks, etc.), so we at least have easy access to models for performance measurements?
  • We could add special cases (smoothing groups, multi-material definitions, n-gons, etc.) to a manually crafted OBJ test file. It could evolve from something I started a couple of months ago, or be started from scratch.

The n-gon support should be added to OBJLoader2 as well (external #12). I will look into it when OBJLoader2 reaches a stable state again (#11746), or port it to the current version (not sure yet).

@zerohun

zerohun commented Aug 2, 2017

Actually, I made a similar PR (#11804) a few weeks ago aiming for minimal performance disadvantage.

@jbaicoianu
Contributor Author

jbaicoianu commented Aug 2, 2017

@jonnenauha I put together a test case with all of the .obj files included with three.js and you're right, Walt Disney's head loads slower with this method. Most of the other models load in around the same time, but that model triggers a whole storm of garbage collection, using both the old and the new method. Since the new method does include more allocations, yes, the previous carefully-tuned garbage collection storm becomes a hurricane :)

I guess this has turned into a bigger task of "try to optimize the objloader to be more memory efficient". I feel like we can do better than regexes, but I'll come back after doing some experiments, and I'll check out what's going on with OBJLoader2 as well.

In the meantime, for those who wish to load models from Google Blocks and other sources which use n-gons, feel free to use this modified version of OBJLoader in your projects. In our tests we have found that the performance difference is an acceptable trade-off in most of the scenes our users are working with. They're excited to find that models which have been failing to load properly for years now suddenly work, and are relieved that they no longer have to remember to triangulate models before using them in their projects - no one's complained about large models taking longer to load yet. But until we can figure out how to do this in a way which doesn't hurt or possibly helps performance, I'll hold off on pushing for this to be included in mainline three.js.

@jonnenauha
Contributor

jonnenauha commented Aug 3, 2017

Alright, again, I'm not against merging this even if it slows down some models. Higher-ups can make those decisions; I'm just a guy who pushed some commits to improve the loader and make it faster (from what it was a long time ago). And that is what you are doing as well! But again, imo the currently happy users should not be penalized too heavily, especially if they don't need this feature (which is hard to estimate, tbh, but there have been a lot of these issues, so it's probably not as rare as I assume).

From my perspective, OBJLoader (1) is "deprecated" atm. @kaisalmen made a faster version 2 that, afaik, matches this loader feature for feature. I believe it does not use regexes or work on one big string, but on typed arrays. It also comes with an optional web worker loader so you don't have to block the main thread if your models are big.

As @kaisalmen said here (or perhaps in another PR related to this), he would be interested in supporting this feature in his loader. The loader lives in its own repo, and OBJ-specific PRs and conversation can happen there. It can have its own independent release cycle and core devs, and the lib is merged into three.js main when new stable versions are done. Maybe open an issue in https://github.com/kaisalmen/WWOBJLoader and discuss how it should be implemented there? Give links to models that need this feature so it's easier to develop, and maybe backlink this issue there so people here will see the GitHub ref.

It's a bit of a shame, to be honest, that @kaisalmen worked really long and hard on the loader and most three.js users do not even know that the (most likely better for everyone's use case) loader exists. I'm very happy it gets merged into main three.js periodically, so the chances people find it get higher :)

Edit: Just noticed there is already an issue, kaisalmen/WWOBJLoader#12, where I tried starting a discussion of how to implement this: an overview of what source files and functions it should touch, before any code is written. I'm sure @kaisalmen has some kind of idea of it already. Please add links to test files in that issue!

@kaisalmen
Contributor

@jonnenauha thanks. I will provide hints and some initial thoughts as soon as I can. I was focused on the evolution towards V2.0 of OBJLoader2, but the n-gon support could be integrated into the parser independently, as it is not heavily affected by the things I am working on (easy to merge into both master and the evolution branch).

@kaisalmen
Contributor

@jbaicoianu, @zerohun, @jonnenauha: with two proposed n-gon solutions for OBJLoader and a not-yet-existing one for OBJLoader2, we should find a generic way to benchmark the different implementations.

I am thinking about a simple example page for every loader flavor that can be fed OBJ files via URL params referencing readily available OBJs. Every loader is executed 50 times in a loop, and every parse execution time is measured. Then we get reproducible and comparable results. This gives us a good idea of what is fast and what is not. What do you think?

I could start prototyping something, but not before the middle of next week.

@mrdoob
Owner

mrdoob commented Aug 5, 2017

@jbaicoianu what do you think about #11804?

@jbaicoianu
Contributor Author

@mrdoob I haven't had time to do any benchmarks yet, so I can't comment on performance differences. Part of me doesn't like that it's split out into different "modes"; it seems like it's taking what should be a relatively simple file format and applying extra complexity to work around JavaScript's inefficiencies, and that there's got to be a better way. But that's the nature of the beast, unfortunately; sometimes something which might seem a bit Rube Goldberg-esque can be better because of memory/GC behavior. So "it's not how I would have written it" isn't really a valid criticism here; what matters is the benchmark numbers and how well the functionality works with real-world examples.

I haven't had time to fully evaluate OBJLoader2 - the work @kaisalmen has been doing on worker-based loading is great, but up until now I hadn't really checked it out because I'd already implemented worker-based loading in my own engine (in a harder-to-backport-to-three way, unfortunately). But it seems like there's a lot more struggle involved in the underlying OBJLoader implementations than I thought :) I didn't actually realize that WWOBJLoader used a different loader under the hood. Going with an ArrayBuffer-based approach is definitely the way to go; I saw some good speed-ups in experiments with that as well, but nothing close to what I'd consider ready to share yet.

I like @kaisalmen's suggestion of setting up a benchmarking page to test different implementations. This could be a good effort to benefit all WebGL projects: an open repository of sample models from as many unique sources as we can find, covering as much of the varied functionality these formats support as possible, plus a harness for testing different implementations against them. That way we can remove some of the guesswork and frustration of trying to improve the loaders - we all know they could be improved, but it's such a careful balancing act when doing that work; it sometimes feels like black magic trying to get something to work just right with your own use case, only to find that it has horrible effects on someone else's.

@kaisalmen ping me when you want to spend some time on this, I think it would be a great resource.

@mrdoob
Owner

mrdoob commented Aug 9, 2017

I think for the time being I'll go with this approach 👌
