Intersects: faster queries for negative case #635
Funnily enough, my use case for cycle.travel is very similar! I have slightly different rendering rules for roads within urban area polygons, so that country roads are prominent but towns aren't too messy. For example, the road heading east from Tidmarsh becomes narrow and uncased when it enters the urban area. This is done with a set of around 180,000 urban area polygons for Europe, or 8,000 for North America.
Given that polygon indexing is a bit of a niche feature anyway, most users won't see the impact, so I think we can get away without adding the knob. But defaulting to min(basezoom, 14) sounds sensible.
This PR decreases the time for me to build my road layer for Colorado from 471 seconds to 11 seconds. I'm not sure if this ought to be the new default, or if it should have a config knob, so I'm hoping to use this PR for discussion.

I'm starting to really get my hands dirty with tilemaker, and have come across a performance hotspot with `Intersects` against a layer with many (relatively) small polygons. In systemed#593 (reply in thread), you mentioned:

> The Lua methods like FindIntersecting etc. will need some thought here. They're really useful for region-specific processing so I'd like to keep them, but they do require keeping those particular geometries in memory. Generally there won't be too many of them - rough country polygons etc. - so the impact on memory isn't huge.

I think my use differs from the norm a bit. I maintain two GeoJSON datasets that list national parks (50K items) and city parks (150K items) for North America. So instead of ~200 country polygons that densely cover the entire planet, I have ~200,000 park polygons that sparsely cover patches of the planet. When rendering roads/paths/POIs, I test whether or not they're in a park in order to write different attributes to my tileset.

I'm not very familiar with the R-tree data structure. Some Googling makes me think that it's expensive to query, even in the case of a negative result.

This PR makes negative queries faster for sparse layers. It maintains a bitmap of your z14 (or whatever your basezoom is) tiles, indicating which tiles might possibly intersect a shape in the layer. This dramatically speeds up the sparse case, but at some cost:

- memory: a z14 layer requires an extra 33MB to store the bitmap
- runtime: when ingesting the shapes, we compute each shape's envelope and update the bitmap

For dense layers like countries, this will likely not speed things up at all. In fact, I'd expect a small slowdown. It will also use memory, varying with your basezoom. For z14, it's 33MB. Not a big deal, IMO. If your basezoom were z16, though, it'd be 528MB. Probably a big deal, IMO.

That said -- we don't have to index at the basezoom level. We could index at the lower of the basezoom and z14. Or we could introduce a knob in the layer config, like:

```
isIndexed: "sparse"
```

or

```
isIndexed: "dense"
```

that tries to capture whether or not it's worth creating the bitmap.
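The bitmap idea can be sketched roughly as follows. This is a minimal illustration with made-up names, not tilemaker's actual code: at load time each shape's envelope is rasterized into a z14 tile grid (one bit per tile, 16384 × 16384 bits ≈ 32 MiB, matching the "33MB" figure above), and at query time a clear bit lets us skip the R-tree entirely.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the z14 tile bitmap described above.
constexpr int kIndexZoom = 14;
constexpr uint32_t kTilesPerSide = 1u << kIndexZoom;  // 16384

struct TileBitmap {
    // 16384 * 16384 bits = 32 MiB per layer.
    std::vector<bool> bits =
        std::vector<bool>((size_t)kTilesPerSide * kTilesPerSide, false);

    static uint32_t lonToX(double lon) {
        return (uint32_t)std::min<double>(kTilesPerSide - 1,
            (lon + 180.0) / 360.0 * kTilesPerSide);
    }
    static uint32_t latToY(double lat) {
        // Web Mercator projection of latitude (the projected "latp").
        const double pi = 3.14159265358979323846;
        double r = lat * pi / 180.0;
        double y = (1.0 - std::log(std::tan(r) + 1.0 / std::cos(r)) / pi) / 2.0;
        return (uint32_t)std::min<double>(kTilesPerSide - 1, y * kTilesPerSide);
    }

    // Load phase, once per shape: mark every z14 tile covered by the
    // shape's bounding box. Fast, but lossy (bbox overstates coverage).
    void addEnvelope(double minLon, double minLat, double maxLon, double maxLat) {
        uint32_t x1 = lonToX(minLon), x2 = lonToX(maxLon);
        uint32_t y1 = latToY(maxLat), y2 = latToY(minLat);  // y grows southward
        for (uint32_t x = x1; x <= x2; x++)
            for (uint32_t y = y1; y <= y2; y++)
                bits[(size_t)x * kTilesPerSide + y] = true;
    }

    // Query phase: if the bit is clear, no shape's bounding box touches
    // this tile, so the expensive R-tree query can be skipped.
    bool mayIntersect(double lon, double lat) const {
        return bits[(size_t)lonToX(lon) * kTilesPerSide + latToY(lat)];
    }
};
```

A sparse layer leaves almost all bits clear, so the common negative case never reaches the R-tree; a dense layer sets most bits, which is why the PR expects no win (and a small slowdown) there.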
Force-pushed from 615846b to ebf64a7.
Ha, yeah, that's exactly why I want it, too. Cities are such boring places, but maps seem to be full of them. :) I've rebased against master and clamped the index zoom level to the lesser of z14 and the base zoom.
Merged - thank you!
In systemed#635, we introduced a way to avoid calling Boost's R-tree for `Intersects` when we knew the underlying z14 tile had no shapes in it. This was a perf win (although not as big as was claimed in that PR -- there was a bug, fixed in systemed#641, which was the source of much of the speedup).

This improves on that work for the case of large, irregularly shaped polygons. Currently, for each shape in the shapefile/GeoJSON file, we compute the set of z14 tiles in its bounding box and set a bit in a bitset for each of those tiles. This is fast to compute, but lossy. For example, the bounding box of https://www.openstreetmap.org/relation/9848163#map=12/39.9616/-105.2197 covers much of the city of Boulder, even though the park itself is entirely outside of Boulder.

Instead, this PR now uses 2 bits per z14 tile. The loading of shapes is as before: the first bit is set to 1 if the z14 tile is contained in the bounding box of a shape. The second bit is initialized to 0. At `Intersects` time, we lazily refine the index, but only for those tiles that are actually being queried: we check whether the z14 tile actually intersects any of the shapes, and then either set the second bit to 1 or clear the first bit.

This doubles the memory requirement for indexing from 32MB to 64MB per layer.

I've left some counters in - I'm going to fiddle with some other ideas to see if there are other speedups. I'll remove the counters before making a PR.

Related: the code previously treated the lats as non-projected values. This was wrong, but didn't affect correctness, since both the indexing and querying code made the same error. This commit changes it to correctly use latp.
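The lazily-refined two-bit scheme can be sketched like this (again a hypothetical illustration, not the PR's actual code; `exactTest` stands in for the real tile-vs-geometries intersection check). The key property is that the expensive exact check runs at most once per queried tile, and its answer is cached in one of the two bits:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical sketch of the 2-bits-per-tile index described above.
class LazyTileIndex {
public:
    explicit LazyTileIndex(size_t tiles)
        : maybe_(tiles, false), confirmed_(tiles, false) {}

    // Load phase: the tile falls inside some shape's bounding box,
    // so it *might* intersect a shape.
    void markFromBoundingBox(size_t tile) { maybe_[tile] = true; }

    // Query phase: refine the lossy bbox answer at most once per tile.
    // `exactTest` is the expensive check of the tile against the
    // layer's actual geometries.
    bool mayIntersect(size_t tile,
                      const std::function<bool(size_t)>& exactTest) {
        if (!maybe_[tile]) return false;    // bit 1 clear: definitely empty
        if (confirmed_[tile]) return true;  // already refined: go to R-tree
        if (exactTest(tile)) {
            confirmed_[tile] = true;        // tile really touches a shape
            return true;
        }
        maybe_[tile] = false;               // bbox was a false positive
        return false;
    }

private:
    std::vector<bool> maybe_;      // first bit: set from bounding boxes
    std::vector<bool> confirmed_;  // second bit: set once verified exactly
};
```

For the Boulder example above, the tiles covered by the park's bounding box but not the park itself get their first bit cleared on first query, so every subsequent `Intersects` against them is a cheap negative.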