tesselate polygons instead of rendering with stencil buffer #682
Can you list the reasons why here too? Is it exchanging some of the worker's time for improved rendering time?
But if we can't make tessellation fast enough it might not be worth it. Did I miss anything @kkaefer? I also want to see if a simpler algorithm that produces less optimal output (more triangles) could be much faster.
No, those are the reasons.
@ansis this looks interesting:
PNLTRI JS code doesn't look good though... I'm looking into porting https://code.google.com/p/poly2tri/source/browse/python/seidel.py?repo=archive&r=96caf237ea676601374ae9d492fe4feb8514b695 into some nice clean JS — it's very compact compared to all the other algorithms, and it should be fast too. It looks like it can support holes with some minor edits as well.

Here's a thread with the author of the original poly2tri reporting the results of switching to Seidel's algorithm: http://www.box2d.org/forum/viewtopic.php?f=3&t=3430

Here's another thread where he says why he switched to Constrained Delaunay Triangulation: http://www.box2d.org/forum/viewtopic.php?f=3&t=3576 — basically because of better-quality triangulation and better handling of weird polygons. But that algorithm is also much more complex and bulky.
What about the port at https://github.com/r3mi/poly2tri.js?
@kkaefer it looks well implemented and we should definitely try it, but it's pretty huge for my tastes. I have a feeling that the world needs a much simpler JS triangulation lib. :)
Hi guys, the fastest per my tests is libtess.js (https://github.com/brendankenny/libtess.js), however for large polygons the bottleneck is still there. More info here: http://blog.sumbera.com/2014/07/28/webgl-polygons-fill-with-libtess-js/
@Sumbera interesting, thanks! I still wonder how it compares to poly2tri.js.
No luck so far, got an error at line 822 of poly2tri.js: ('poly2tri EdgeEvent: Collinear not supported!', [eq, p1, ep])
@Sumbera what if you run the data through http://sourceforge.net/p/jsclipper/wiki/documentation/#clipperlibclippersimplifypolygon before feeding it to poly2tri?
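The Collinear error reported above is exactly the kind of thing a simplification pass fixes. As an illustration only (this is not ClipperLib's SimplifyPolygon, just a tiny subset of the cleanup it performs), here's a self-contained sketch that strips consecutive duplicates and exactly-collinear points from a ring of `{x, y}` points:

```javascript
// Drop consecutive duplicate points, then exactly-collinear midpoints,
// from a closed ring of {x, y} points (the last point implicitly joins
// the first). A single pass of each filter; real cleanup code would also
// handle self-intersections, which this sketch does not attempt.
function cleanRing(points) {
    // pass 1: remove points equal to their immediate predecessor
    var pts = points.filter(function (p, i, a) {
        var prev = a[(i + a.length - 1) % a.length];
        return p.x !== prev.x || p.y !== prev.y;
    });
    // pass 2: remove points where the two adjacent edges are collinear
    return pts.filter(function (p, i, a) {
        var prev = a[(i + a.length - 1) % a.length],
            next = a[(i + 1) % a.length];
        // cross product of the edges meeting at p; zero means collinear
        return (p.x - prev.x) * (next.y - p.y) -
               (p.y - prev.y) * (next.x - p.x) !== 0;
    });
}
```

Running poly2tri on the output of something like this avoids the "Collinear not supported" crash without altering the shape's rendered area.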
@mourner - I didn't want to modify the original (official CZ border) data, but I was able to make a dirty fix in poly2tri so it passes. Here is a relative comparison for 120,186 polygon points in Chrome: libtess: 30,584 ms, poly2tri: 5,297 ms.
@Sumbera wow, looks very promising — thanks for doing this! I wonder if we'll have the same kind of performance gain on lots of small polygons (like the building layer).
@mourner I could run 2256 polygons (municipality borders, altogether > 3M vertexes) with poly2tri in 16,701 ms vs 127,834 ms with libtess. However, I had to dirty-fix various other errors from poly2tri (null triangles, or "FLIP failed due to missing triangle..." — so some polygons were wrong), while libtess was fine for the same data. If there are some 2D building data with more polygons but fewer vertexes, I can try that too.
@Sumbera awesome! Actually you could try it directly in this project, in the branch mentioned above https://github.com/mapbox/mapbox-gl-js/compare/triangulate |
Meanwhile I'm halfway through porting the poly2tri Seidel algorithm implementation to JS; could be interesting too.
OK, finished porting the Seidel algorithm, and it surprisingly seems to work! Still no hole handling, and there may be some rough edges and bugs, but here's a result of it run on the poly2tri basic dude shape: Triangulating this dude 1000 times takes ~850ms in Chrome. Now going to benchmark it against poly2tri and then try it with the …
My Seidel port was running about 3-4× slower than poly2tri.js on most test shapes. After lots of fixes and optimizations it's now only ~30% slower (sometimes even faster on very simple shapes). It still doesn't handle holes or vertexes touching edges though. While it was an extremely valuable experiment (and I love delving into isolated algorithmic challenges like this), I think we should go with poly2tri.js for now. The only issue with it is that you need to make sure there are no duplicate points before running it, but it's pretty well written and fast. I may revisit Seidel's algorithm later.
@mourner - thanks for doing this experiment. I can confirm: for a single polygon of 19,416 vertexes I got similar results (750ms seidel vs 560ms poly2tri). Seidel was failing with larger polygons with stack overflows, and the overall triangulation was of worse quality: overlapping triangles and gaps.
Oh boy, will it ever stop — this Seidel algorithm is haunting me and giving me nightmares, but it looks like I won't settle down until it's awesome. Did a lot of optimizations and refactoring (along with some fixes, so there should be no more overlapping triangles or stack overflows). So now it's 50-90% faster than poly2tri. There's still room for improvement — it's capped at n log(n), although on paper the full algorithm is n log*(n) — I just can't get right some little details that are not covered by the papers and books. It also can't handle edges touching each other or other segments (not sure if that can be solved without clipper), and there's no hole support (although it is possible — I made it successfully support one hole at some point, but multiple holes need more thought). Winding order doesn't matter though.
Update: tried seidel in the triangulate branch. Performance turned out to be not as great — only about 70-80% faster than libtess in a dense DC area — mainly because libtess isn't actually much slower than poly2tri; it's only a bit slower in my tests. Also, some shapes are failing — in particular big ones like water and landcover, but also some of the buildings. Once I fix the failing cases in seidel, performance should improve. Current isolated results:

```
typical OSM building (15 vertices)
libtess  x 21,478 ops/sec ±0.54% (98 runs sampled)
poly2tri x 27,603 ops/sec ±1.06% (93 runs sampled)
seidel   x 55,798 ops/sec ±0.36% (100 runs sampled)
seidel is 102% faster than poly2tri (second fastest)

dude shape (94 vertices)
libtess  x 4,681 ops/sec ±0.65% (101 runs sampled)
poly2tri x 3,971 ops/sec ±0.82% (101 runs sampled)
seidel   x 7,852 ops/sec ±0.64% (100 runs sampled)
seidel is 68% faster than libtess (second fastest)

monkey (1204 vertices)
libtess  x 354 ops/sec ±1.91% (93 runs sampled)
poly2tri x 278 ops/sec ±0.73% (91 runs sampled)
seidel   x 484 ops/sec ±1.07% (88 runs sampled)
seidel is 37% faster than libtess (second fastest)
```
Reported two VT issues that currently cause problems when tesselating polygons (regardless of the library used):
Update regarding degenerate cases — we need to convert polygons to strictly simple polygons (no intersecting edges, no touching edges and vertices) to render features like mapbox/mapnik-vector-tile#53 properly. This is what's causing the water/landcover problems in the current triangulate branch, and it will cause issues with any other triangulation library as well. We could do this using JS Clipper. Performance shouldn't take such a big hit, because we could first run triangulation on the non-clipped geometry, then try JS Clipper only if the first attempt fails (and it will fail only for a few features per tile). The library is pretty huge though (25kb minified and gzipped), and it doesn't guarantee proper conversion in all cases, as stated in the docs.
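The fallback idea above (triangulate the raw geometry first, run the expensive cleanup only when that fails) can be sketched with hypothetical `triangulate` and `simplify` stand-ins; neither name comes from the actual codebase, and the real code would call the triangulation library and JS Clipper respectively:

```javascript
// Hypothetical wiring: triangulate() is whatever triangulation library is
// in use, simplify() stands in for a JS Clipper cleanup pass. Both are
// passed in so this sketch stays library-agnostic.
function robustTriangulate(polygon, triangulate, simplify) {
    try {
        return triangulate(polygon);           // fast path: raw geometry
    } catch (e) {
        // slow path, expected for only a few features per tile:
        // clean the polygon first, then retry
        return triangulate(simplify(polygon));
    }
}
```

The design keeps the common case free of any Clipper cost, paying for simplification only on the handful of degenerate features.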
I think for fixing degenerate cases, what we should try instead of using clipper is:
OK, I'm mostly done with the seidel optimizations — it's now the fastest JS triangulation library by far.

```
typical OSM building (15 vertices):
seidel   x 79,031 ops/sec ±0.55% (99 runs sampled)
libtess  x 21,810 ops/sec ±0.56% (98 runs sampled)
poly2tri x 28,848 ops/sec ±1.43% (94 runs sampled)
seidel is 174% faster than poly2tri (second fastest)

dude shape (94 vertices):
seidel   x 10,201 ops/sec ±0.54% (100 runs sampled)
libtess  x 4,566 ops/sec ±1.03% (100 runs sampled)
poly2tri x 3,692 ops/sec ±1.05% (98 runs sampled)
seidel is 123% faster than libtess (second fastest)

monkey (1204 vertices):
seidel   x 611 ops/sec ±0.87% (97 runs sampled)
libtess  x 337 ops/sec ±2.32% (93 runs sampled)
poly2tri x 254 ops/sec ±0.83% (89 runs sampled)
seidel is 81% faster than libtess (second fastest)
```
@mourner this is awesome! Thank you for extracting and publishing the module. Would love to see more competition from @brendankenny too ;) (I probably won't embarrass myself trying.)
@bcamper yeah! It'll be fun to run earcut on the Mapzen libtess benchmarks, and to run more tests to see how bad it is on cramped OSM data with lots of self-intersections — I'm afraid it will fail hard in some cases. :)
@mourner thanks for earcut! Unbelievable. Just quickly tested it on my geo dataset (2256 polygons with 3,328,794 vertexes):
@Sumbera what?! How can IE11 be 30 times faster than Chrome?
...it's measured in [s] (seconds, time taken), not [ops/sec]... so IE11 is 30 times slower.
Yeah, I meant slower. :) Glad to see that it's working!
Sorry for spamming again — this is already turning into memoirs... So, I wrote the ring classification code, which seems to sorta work for now (it takes maybe ~5-20% of the tessellation time), so I was finally able to see how earcut copes with all the VT data on a live map.

On high zoom levels it works wonderfully — no noticeable artifacts in dense city areas at all, and it's very fast, which made me very optimistic at first. But the more you zoom out, the more complex water polygons with lots of holes start to appear, and the more simplification kicks in, introducing all sorts of weird self-intersections and degeneracies, and it starts to get really, really bad.

Spent a lot of time trying to deduce the degenerate cases that break it, to maybe find a workaround, but started to fall into the abyss of madness and despair again. Debugging bad-data triangulation is really, really hard. Who knew this task would be so damn difficult to solve, especially working on it alone. :(

By the way, to be fair, even libtess breaks on some cramped data. I'll try to find some examples that produce garbled triangulation to provide to @brendankenny.
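For background on the ring classification step mentioned above: the usual way to tell outer rings from holes is the sign of each ring's area (the shoelace formula). A minimal sketch of the general idea — assuming y grows upward and counterclockwise outer rings, which is a convention choice, not necessarily what the branch's code does:

```javascript
// Signed area of a ring of {x, y} points (shoelace formula).
// With y pointing up, a counterclockwise ring yields a positive area;
// vector-tile screen coordinates flip y, so only the relative sign
// between rings matters in practice.
function signedArea(ring) {
    var sum = 0;
    for (var i = 0, j = ring.length - 1; i < ring.length; j = i++) {
        sum += (ring[j].x - ring[i].x) * (ring[i].y + ring[j].y);
    }
    return sum / 2;
}

// Classify rings by winding: one direction is an outer ring, the
// opposite direction is a hole in the preceding outer ring.
function classifyRings(rings) {
    return rings.map(function (ring) {
        return signedArea(ring) > 0 ? 'outer' : 'hole';
    });
}
```

This is also why degenerate tile-sized clip squares are so damaging: a square with the "wrong" winding silently flips an entire tile from land to water.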
@brendankenny here's the fun libtess result:
Oh man, that really sucks. A simplification algorithm that doesn't produce bad polygons? Perhaps too much work.
yeah, that's rough :( Errors in our libraries aside, though, the triangulation can only do so much without specific knowledge about the geometry. One of the buildings @bcamper gave me for a Mapzen test is a good example -- open the libtess.js expectations viewer and switch the test geometry to …

The original libtess description just promises to be robust in the face of degeneracies and to provide a "reasonable tesselation" given the input, which is really the best we can aim for. I think you're right to look to earlier stages of geometry input and processing for a full fix for these issues.
So how come these bad polygons don't cause problems with normal rasterisation?
OK, now we're getting somewhere! After hours of crazy debugging and a bunch of earcut point releases fixing different race conditions, I can't seem to find any glaring degeneracies browsing the map on zoom 7+. It handles everything pretty well. The only minor issue is the occasional tiny sliver triangle on the more simplified shapes, like in the image below, but they're hardly noticeable. I also found out what the issue is on lower zoom levels. It turns out it's not triangulation — Mapnik VT returns lots of degenerate clipped squares the size of the tile, which leads the ring classification code to think that the whole square is water with lots of islands when it's in fact all US/Canada land. Generally, ring classification is a trap, and we should get mapbox/mapnik-vector-tile#59 as soon as possible.
Released earcut 1.1.0 today with faster hole handling and more robustness. The library turned out to be pretty awesome. While there's still room for improvement on polygons with lots of points, it's good enough for now. For the typical dense map view of DC mentioned above, triangulating 6 tiles takes ~50ms. For a lower-zoom view with lots of water, 4 tiles take ~390ms, with half of that spent on classifying rings. Once that need is gone, we're ready to merge.
FYI, I managed to optimize earcut with a clever hashing technique using a z-order curve, and also a much better hole elimination algorithm, so it's now pretty fast even on complex polygons with lots of points (there are plenty of those on lower zoom levels with water/parks). Latest benchmarks:
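For reference, the z-order curve trick interleaves the bits of quantized x and y coordinates so that sorting points by the resulting Morton code roughly orders them by spatial proximity, enabling cheap nearby-point lookups during ear clipping. A generic sketch of the bit interleaving for 16-bit coordinates (illustrative of the technique, not earcut's exact code):

```javascript
// Interleave the bits of two 16-bit integers x and y into a single
// 32-bit Morton (z-order) code: x occupies the even bit positions,
// y the odd ones. Each step spreads the bits apart with shift+mask.
function zOrder(x, y) {
    x = (x | (x << 8)) & 0x00FF00FF;
    x = (x | (x << 4)) & 0x0F0F0F0F;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;

    y = (y | (y << 8)) & 0x00FF00FF;
    y = (y | (y << 4)) & 0x0F0F0F0F;
    y = (y | (y << 2)) & 0x33333333;
    y = (y | (y << 1)) & 0x55555555;

    return x | (y << 1);
}
```

Points whose codes are adjacent after sorting tend to be close in the plane, so a candidate-ear test only needs to scan outward from a point's position in the sorted order instead of checking every vertex.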
awesome 'era'cut! thanks!
⇢ #1606
We should draw polygons by tesselating them instead of using the stencil buffer trick. Native already does this. 7b35312 is a first attempt that uses libtess.js. Rendering is faster but the tesselation is slow. The buildings layer can take up to 500ms to tesselate.
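To make the tesselation idea concrete: instead of drawing an unprocessed fan and letting the stencil buffer cancel out the overdraw, we compute explicit triangle indices up front and let the GPU draw them directly. For a convex polygon the triangulation is just a fan; a minimal illustrative sketch (concave polygons with holes need a real algorithm like ear clipping or Seidel's, which is what this issue is about):

```javascript
// Fan-triangulate a convex polygon given as a flat array
// [x0, y0, x1, y1, ...]; returns triangle vertex indices suitable for
// a WebGL element array buffer. Only correct for convex input.
function fanTriangulate(coords) {
    var indices = [];
    var numVertices = coords.length / 2;
    // pivot every triangle on vertex 0: (0,1,2), (0,2,3), ...
    for (var i = 1; i < numVertices - 1; i++) {
        indices.push(0, i, i + 1);
    }
    return indices;
}
```

An n-vertex polygon always yields n - 2 triangles, so rendering cost is known up front — unlike the stencil trick, which pays for every covered pixel twice.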