Add --generate-variable-depth-tile-pyramid option #251

Merged · 44 commits · Aug 6, 2024

Commits
22a1e27
Track output position at the file level instead of within each tile
e-n-f May 31, 2024
6e5d714
Track file position where the child tile data begins
e-n-f May 31, 2024
b07509b
Add option and document its intended behavior
e-n-f May 31, 2024
744e8b6
Changing the detail loop to account for stopping early
e-n-f Jun 20, 2024
7057c2b
I forgot I already added an option for this
e-n-f Jun 20, 2024
3c07213
Stop early if we can make a complete tile
e-n-f Jun 20, 2024
30a1655
Add a test of zoom truncation with limited feature count
e-n-f Jun 21, 2024
cf4f912
Forgot to commit the actual code change
e-n-f Jun 21, 2024
049c24c
Make room for a vertex count in the header of each serialized tile
e-n-f Jun 22, 2024
81436a7
Estimate tile complexity; don't try truncating when unlikely to work
e-n-f Jun 24, 2024
bedfa8a
Be more conservative, because ever retrying a tile is a big speed hit
e-n-f Jun 24, 2024
830885f
If stopping early, don't simplify or clean; leave that to overzoom
e-n-f Jun 24, 2024
d4efb4d
Add tiny polygon reduction / dust to overzoom
e-n-f Jun 24, 2024
8f3819d
Don't try to stop early in the children if we dropped anything by rate
e-n-f Jun 25, 2024
6489aa7
Fflush here too before pwriting
e-n-f Jun 25, 2024
33a4656
Don't stop early if we ended up dropping any features.
e-n-f Jun 25, 2024
03ef561
Fix warning
e-n-f Jun 26, 2024
43e5b61
Fix warnings
e-n-f Jun 26, 2024
6a6ec0c
Oops, checking for the wrong expected return value
e-n-f Jun 26, 2024
eb72311
Cleanup from adding line simplification in overzoom
e-n-f Jul 30, 2024
04d7de7
Current (wrong) behavior when combining coalescing and truncating
e-n-f Jul 31, 2024
040b638
Keep a list of parent tiles to skip rather than truncating
e-n-f Jul 31, 2024
527a764
Now the coalesced tiles in z12 get children in z13
e-n-f Jul 31, 2024
aec0d26
Don't double-count feature dropping when the zoom level is retried
e-n-f Aug 1, 2024
2b5e518
Correct README description
e-n-f Aug 1, 2024
dc3dad2
Remove todo about special case below basezoom, which is accounted for
e-n-f Aug 1, 2024
893489a
Be a little more aggressive in drop-densest determination
e-n-f Aug 1, 2024
4eb41a1
Scale tile feature limit for megatiles in the same way as byte limit
e-n-f Aug 1, 2024
0698aeb
Fully deprecate -detect-shared-borders into an alias
e-n-f Aug 2, 2024
e5361f8
Track the distances found in the douglas-peucker recursion
e-n-f Aug 2, 2024
753f1b7
Serialize and deserialize the distance with the vertices
e-n-f Aug 2, 2024
a9445d2
Revert "Serialize and deserialize the distance with the vertices"
e-n-f Aug 5, 2024
ac11755
Revert "Track the distances found in the douglas-peucker recursion"
e-n-f Aug 5, 2024
a3e4d3c
Revert "Fully deprecate -detect-shared-borders into an alias"
e-n-f Aug 5, 2024
53ef0a9
Better tracking of whether we failed to make a full-detail tile
e-n-f Aug 5, 2024
60ad06a
Put a bloom filter in front of the binary search for shared nodes
e-n-f Aug 5, 2024
96f33d2
Forgot to take out this printf
e-n-f Aug 5, 2024
8b46199
Improve dispatch of tiling tasks
e-n-f Aug 5, 2024
3e8c8e2
Still dispatch the biggest tasks first
e-n-f Aug 5, 2024
b329489
Track zoom truncation in the strategies list in the tileset metadata
e-n-f Aug 5, 2024
d1d8238
Prescan for small deltas before doing proper simplification
e-n-f Aug 6, 2024
82969ee
Revert "Prescan for small deltas before doing proper simplification"
e-n-f Aug 6, 2024
8a7da22
Update version and changelog
e-n-f Aug 6, 2024
d3b5c51
Rename to --generate-variable-depth-tile-pyramid
e-n-f Aug 6, 2024

Files changed

8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,11 @@
# 2.58.0

* Add --generate-variable-depth-tile-pyramid option
* Add --line-simplification and --tiny-polygon-size options to tippecanoe-overzoom
* Adjust tile feature limit for --retain-points-multiplier
* Tune convergence rate for --coalesce-densest and --drop-densest
* Fix overreported drop and coalesce counts in strategies

# 2.57.0

* Add multi-tile input to tippecanoe-overzoom
5 changes: 5 additions & 0 deletions Makefile
@@ -355,6 +355,11 @@ overzoom-test: tippecanoe-overzoom
./tippecanoe-decode tests/pbf/12-2145-1391-filter2.pbf 12 2145 1391 > tests/pbf/12-2145-1391-filter2.pbf.json.check
cmp tests/pbf/12-2145-1391-filter2.pbf.json.check tests/pbf/12-2145-1391-filter2.pbf.json
rm tests/pbf/12-2145-1391-filter2.pbf.json.check tests/pbf/12-2145-1391-filter2.pbf
# Tiny polygon reduction
./tippecanoe-overzoom --line-simplification=5 --tiny-polygon-size=50 -o tests/pbf/countries-0-0-0.pbf.out tests/pbf/countries-0-0-0.pbf 0/0/0 0/0/0
./tippecanoe-decode tests/pbf/countries-0-0-0.pbf.out 0 0 0 > tests/pbf/countries-0-0-0.pbf.out.json.check
cmp tests/pbf/countries-0-0-0.pbf.out.json.check tests/pbf/countries-0-0-0.pbf.out.json
rm tests/pbf/countries-0-0-0.pbf.out tests/pbf/countries-0-0-0.pbf.out.json.check

join-test: tippecanoe tippecanoe-decode tile-join
./tippecanoe -q -f -z12 -o tests/join-population/tabblock_06001420.mbtiles -YALAND10:'Land area' -L'{"file": "tests/join-population/tabblock_06001420.json", "description": "population"}'
1 change: 1 addition & 0 deletions README.md
@@ -352,6 +352,7 @@ Parallel processing will also be automatic if the input file is in FlatGeobuf fo
specified maximum zoom and to any levels added beyond that.
* `--extend-zooms-if-still-dropping-maximum=`_count_: Increase the maxzoom if features are still being dropped at that zoom level
by up to _count_ zoom levels.
* `-at` or `--generate-variable-depth-tile-pyramid`: Don't produce child tiles for any tile that should be sufficient to be overzoomed to any higher zoom level. Such tiles will be produced with maximum detail and no simplification or polygon cleaning. Tiles with point features below the basezoom, or where any features have to be dropped dynamically, or which contain too many features or bytes with full detail, will be written out with normal detail and split into child tiles. Tilesets generated with this option are suitable for use only with tile servers that will find the appropriate tile to overzoom from and will simplify and clean the geometries appropriately before serving the tile (see the lookup sketch after this list).
* `-R` _zoom_`/`_x_`/`_y_ or `--one-tile=`_zoom_`/`_x_`/`_y_: Set the minzoom and maxzoom to _zoom_ and produce only
the single specified tile at that zoom level.
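
The option above assumes a serving layer that, when a requested tile is missing because its pyramid stopped early, walks up to the nearest stored ancestor tile and overzooms it on the fly, simplifying and cleaning as it goes. Below is a minimal sketch of that lookup; the TileAddress struct and the exists() callback are hypothetical stand-ins, not part of this PR or of tippecanoe.

#include <functional>
#include <optional>

struct TileAddress { int z, x, y; };

// Walk from the requested tile toward the root until a stored tile is found;
// that is the tile to overzoom up to the requested zoom level.
std::optional<TileAddress> tile_to_overzoom_from(TileAddress want,
                                                 const std::function<bool(TileAddress)> &exists) {
    for (TileAddress t = want; t.z >= 0; t = TileAddress{t.z - 1, t.x / 2, t.y / 2}) {
        if (exists(t)) {
            return t;
        }
    }
    return std::nullopt;  // nothing stored on the path back to zoom 0
}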

294 changes: 289 additions & 5 deletions clip.cpp
@@ -1,3 +1,4 @@
#include <stack>
#include <stdlib.h>
#include <mapbox/geometry/point.hpp>
#include <mapbox/geometry/multi_polygon.hpp>
@@ -340,7 +341,7 @@ drawvec clean_or_clip_poly(drawvec &geom, int z, int buffer, bool clip, bool try
if (k != i) {
fprintf(f, ",");
}
fprintf(f, "[%lld,%lld]", geom[k].x, geom[k].y);
fprintf(f, "[%lld,%lld]", (long long) geom[k].x, (long long) geom[k].y);
}

fprintf(f, "]");
@@ -755,10 +756,274 @@ static std::vector<std::pair<double, double>> clip_poly1(std::vector<std::pair<d
return out;
}

double distance_from_line(long long point_x, long long point_y, long long segA_x, long long segA_y, long long segB_x, long long segB_y) {
long long p2x = segB_x - segA_x;
long long p2y = segB_y - segA_y;

// These calculations must be made in integers instead of floating point
// to make them consistent between x86 and arm floating point implementations.
//
// Coordinates may be up to 34 bits, so their product is up to 68 bits,
// making their sum up to 69 bits. Downshift before multiplying to keep them in range.
double something = ((p2x / 4) * (p2x / 8) + (p2y / 4) * (p2y / 8)) * 32.0;
// likewise
double u = (0 == something) ? 0 : ((point_x - segA_x) / 4 * (p2x / 8) + (point_y - segA_y) / 4 * (p2y / 8)) * 32.0 / (something);

if (u >= 1) {
u = 1;
} else if (u <= 0) {
u = 0;
}

double x = segA_x + u * p2x;
double y = segA_y + u * p2y;

double dx = x - point_x;
double dy = y - point_y;

double out = std::round(sqrt(dx * dx + dy * dy) * 16.0) / 16.0;
return out;
}
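
// Illustrative sketch, not part of this diff: the same point-to-segment
// distance in plain floating point. distance_from_line() above computes the
// projection parameter u = dot(P - A, B - A) / |B - A|^2, clamps u to [0, 1],
// and returns the distance from P to the projected point; its /4, /8, *32.0
// scaling downshifts the coordinate differences before multiplying so their
// products stay in range and the result comes out identical on x86 and ARM.
#include <cmath>
#include <cstdio>

static double distance_from_line_double(double px, double py,
                                        double ax, double ay,
                                        double bx, double by) {
    double dx = bx - ax, dy = by - ay;
    double len2 = dx * dx + dy * dy;
    double u = (len2 == 0) ? 0 : ((px - ax) * dx + (py - ay) * dy) / len2;
    if (u < 0) u = 0;  // clamp to the segment rather than the infinite line
    if (u > 1) u = 1;
    return std::hypot(px - (ax + u * dx), py - (ay + u * dy));
}

int main() {
    // A point 10 units above the middle of a horizontal segment:
    printf("%.2f\n", distance_from_line_double(5, 10, 0, 0, 10, 0));  // prints 10.00
    return 0;
}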

// https://github.com/Project-OSRM/osrm-backend/blob/733d1384a40f/Algorithms/DouglasePeucker.cpp
void douglas_peucker(drawvec &geom, int start, int n, double e, size_t kept, size_t retain, bool prevent_simplify_shared_nodes) {
std::stack<int> recursion_stack;

if (!geom[start + 0].necessary || !geom[start + n - 1].necessary) {
fprintf(stderr, "endpoints not marked necessary\n");
exit(EXIT_IMPOSSIBLE);
}

int prev = 0;
for (int here = 1; here < n; here++) {
if (geom[start + here].necessary) {
recursion_stack.push(prev);
recursion_stack.push(here);
prev = here;

if (prevent_simplify_shared_nodes) {
if (retain > 0) {
retain--;
}
}
}
}
// These segments are put on the stack from start to end,
// independent of winding, so note that anything that uses
// "retain" to force it to keep at least N points will
// keep a different set of points when wound one way than
// when wound the other way.

while (!recursion_stack.empty()) {
// pop next element
int second = recursion_stack.top();
recursion_stack.pop();
int first = recursion_stack.top();
recursion_stack.pop();

double max_distance = -1;
int farthest_element_index;

// find index idx of element with max_distance
int i;
if (geom[start + first] < geom[start + second]) {
farthest_element_index = first;
for (i = first + 1; i < second; i++) {
double temp_dist = distance_from_line(geom[start + i].x, geom[start + i].y, geom[start + first].x, geom[start + first].y, geom[start + second].x, geom[start + second].y);

double distance = std::fabs(temp_dist);

if ((distance > e || kept < retain) && (distance > max_distance || (distance == max_distance && geom[start + i] < geom[start + farthest_element_index]))) {
farthest_element_index = i;
max_distance = distance;
}
}
} else {
farthest_element_index = second;
for (i = second - 1; i > first; i--) {
double temp_dist = distance_from_line(geom[start + i].x, geom[start + i].y, geom[start + second].x, geom[start + second].y, geom[start + first].x, geom[start + first].y);

double distance = std::fabs(temp_dist);

if ((distance > e || kept < retain) && (distance > max_distance || (distance == max_distance && geom[start + i] < geom[start + farthest_element_index]))) {
farthest_element_index = i;
max_distance = distance;
}
}
}

if (max_distance >= 0) {
// mark idx as necessary
geom[start + farthest_element_index].necessary = 1;
kept++;

if (geom[start + first] < geom[start + second]) {
if (1 < farthest_element_index - first) {
recursion_stack.push(first);
recursion_stack.push(farthest_element_index);
}
if (1 < second - farthest_element_index) {
recursion_stack.push(farthest_element_index);
recursion_stack.push(second);
}
} else {
if (1 < second - farthest_element_index) {
recursion_stack.push(farthest_element_index);
recursion_stack.push(second);
}
if (1 < farthest_element_index - first) {
recursion_stack.push(first);
recursion_stack.push(farthest_element_index);
}
}
}
}
}

// cut-down version of simplify_lines(), not dealing with shared node preservation
static drawvec simplify_lines_basic(drawvec &geom, int z, int detail, double simplification, size_t retain) {
int res = 1 << (32 - detail - z);

for (size_t i = 0; i < geom.size(); i++) {
if (geom[i].op == VT_MOVETO) {
geom[i].necessary = 1;
} else if (geom[i].op == VT_LINETO) {
geom[i].necessary = 0;
// if this is actually the endpoint, not an intermediate point,
// it will be marked as necessary below
} else {
geom[i].necessary = 1;
}
}

for (size_t i = 0; i < geom.size(); i++) {
if (geom[i].op == VT_MOVETO) {
size_t j;
for (j = i + 1; j < geom.size(); j++) {
if (geom[j].op != VT_LINETO) {
break;
}
}

geom[i].necessary = 1;
geom[j - 1].necessary = 1;

if (j - i > 1) {
douglas_peucker(geom, i, j - i, res * simplification, 2, retain, false);
}
i = j - 1;
}
}

size_t out = 0;
for (size_t i = 0; i < geom.size(); i++) {
if (geom[i].necessary) {
geom[out++] = geom[i];
}
}
geom.resize(out);
return geom;
}
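
// Illustrative sketch, not part of this diff: how the tolerance that
// simplify_lines_basic() passes to douglas_peucker() relates to tile pixels.
// World coordinates span 2^32, so one pixel of a tile at zoom z with `detail`
// bits of resolution covers 2^(32 - detail - z) world units (the `res`
// above); multiplying that by the simplification factor gives the tolerance.
// For example, z = 0 and detail = 12 with --line-simplification=5 gives
// res = 2^20 = 1048576 and a tolerance of 5242880 world units, i.e. 5 pixels.
static double simplification_tolerance(int z, int detail, double simplification) {
    long long res = 1LL << (32 - detail - z);  // world units per tile pixel
    return res * simplification;               // passed as `e` to douglas_peucker()
}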

drawvec reduce_tiny_poly(drawvec const &geom, int z, int detail, bool *still_needs_simplification, bool *reduced_away, double *accum_area, double tiny_polygon_size) {
drawvec out;
const double pixel = (1LL << (32 - detail - z)) * (double) tiny_polygon_size;

bool included_last_outer = false;
*still_needs_simplification = false;
*reduced_away = false;

for (size_t i = 0; i < geom.size(); i++) {
if (geom[i].op == VT_MOVETO) {
size_t j;
for (j = i + 1; j < geom.size(); j++) {
if (geom[j].op != VT_LINETO) {
break;
}
}

double area = get_area(geom, i, j);

// XXX There is an ambiguity here: If the area of a ring is 0 and it is followed by holes,
// we don't know whether the area-0 ring was a hole too or whether it was the outer ring
// that these subsequent holes are somehow being subtracted from. I hope that if a polygon
// was simplified down to nothing, its holes also became nothing.

if (area != 0) {
// These are pixel coordinates, so area > 0 for the outer ring.
// If the outer ring of a polygon was reduced to a pixel, its
// inner rings must just have their area de-accumulated rather
// than being drawn since we don't really know where they are.

// i.e., this outer ring is small enough that we are including it
// in a tiny polygon rather than letting it represent itself,
// OR it is an inner ring and we haven't output an outer ring for it to be
// cut out of, so we are just subtracting its area from the tiny polygon
// rather than trying to deal with it geometrically
if ((area > 0 && area <= pixel * pixel) || (area < 0 && !included_last_outer)) {
*accum_area += area;
*reduced_away = true;

if (area > 0 && *accum_area > pixel * pixel) {
// XXX use centroid;

out.emplace_back(VT_MOVETO, geom[i].x - pixel / 2, geom[i].y - pixel / 2);
out.emplace_back(VT_LINETO, geom[i].x - pixel / 2 + pixel, geom[i].y - pixel / 2);
out.emplace_back(VT_LINETO, geom[i].x - pixel / 2 + pixel, geom[i].y - pixel / 2 + pixel);
out.emplace_back(VT_LINETO, geom[i].x - pixel / 2, geom[i].y - pixel / 2 + pixel);
out.emplace_back(VT_LINETO, geom[i].x - pixel / 2, geom[i].y - pixel / 2);

*accum_area -= pixel * pixel;
}

if (area > 0) {
included_last_outer = false;
}
}
// i.e., this ring is large enough that it gets to represent itself
// or it is a tiny hole out of a real polygon, which we are still treating
// as a real geometry because otherwise we can accumulate enough tiny holes
// that we will drop the next several outer rings getting back up to 0.
else {
for (size_t k = i; k < j && k < geom.size(); k++) {
out.push_back(geom[k]);
}

// which means that the overall polygon has a real geometry,
// which means that it gets to be simplified.
*still_needs_simplification = true;

if (area > 0) {
included_last_outer = true;
}
}
} else {
// area is 0: doesn't count as either having been reduced away,
// since it was probably just degenerate from having been clipped,
// or as needing simplification, since it produces no output.
}

i = j - 1;
} else {
fprintf(stderr, "how did we get here with %d in %d?\n", geom[i].op, (int) geom.size());

for (size_t n = 0; n < geom.size(); n++) {
fprintf(stderr, "%d/%lld/%lld ", geom[n].op, (long long) geom[n].x, (long long) geom[n].y);
}
fprintf(stderr, "\n");

out.push_back(geom[i]);
}
}

return out;
}
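
// Illustrative sketch, not part of this diff: the accumulation scheme behind
// reduce_tiny_poly(). Rings too small to draw have their areas pooled in
// *accum_area; once the pool exceeds one tiny-polygon square, a placeholder
// square is emitted and one square's worth of area is subtracted, so the
// total area of many dropped rings is roughly preserved. The real code also
// de-accumulates hole areas and places the square at the ring's location;
// this standalone version only counts squares, using hypothetical numbers.
#include <cstdio>
#include <vector>

int main() {
    double side = 50.0;           // hypothetical placeholder-square side, in world units
    double square = side * side;  // 2500: area threshold for emitting one placeholder
    std::vector<double> ring_areas = {900, 800, 900, 400, 600};  // each below `square`
    double accum = 0;
    int placeholders = 0;
    for (double area : ring_areas) {
        accum += area;            // pool the area of each reduced-away ring
        if (accum > square) {
            placeholders++;       // emit one stand-in square
            accum -= square;
        }
    }
    printf("%d placeholder(s), %.0f units of area still pooled\n", placeholders, accum);
    // prints: 1 placeholder(s), 1100 units of area still pooled
    return 0;
}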

std::string overzoom(std::vector<input_tile> const &tiles, int nz, int nx, int ny,
int detail, int buffer, std::set<std::string> const &keep, bool do_compress,
std::vector<std::pair<unsigned, unsigned>> *next_overzoomed_tiles,
- bool demultiply, json_object *filter, bool preserve_input_order, std::unordered_map<std::string, attribute_op> const &attribute_accum, std::vector<std::string> const &unidecode_data) {
+ bool demultiply, json_object *filter, bool preserve_input_order, std::unordered_map<std::string, attribute_op> const &attribute_accum, std::vector<std::string> const &unidecode_data, double simplification,
+ double tiny_polygon_size) {
std::vector<source_tile> decoded;

for (auto const &t : tiles) {
@@ -784,7 +1049,7 @@ std::string overzoom(std::vector<input_tile> const &tiles, int nz, int nx, int n
decoded.push_back(out);
}

- return overzoom(decoded, nz, nx, ny, detail, buffer, keep, do_compress, next_overzoomed_tiles, demultiply, filter, preserve_input_order, attribute_accum, unidecode_data);
+ return overzoom(decoded, nz, nx, ny, detail, buffer, keep, do_compress, next_overzoomed_tiles, demultiply, filter, preserve_input_order, attribute_accum, unidecode_data, simplification, tiny_polygon_size);
}

struct tile_feature {
@@ -885,7 +1150,8 @@ static struct preservecmp {
std::string overzoom(std::vector<source_tile> const &tiles, int nz, int nx, int ny,
int detail, int buffer, std::set<std::string> const &keep, bool do_compress,
std::vector<std::pair<unsigned, unsigned>> *next_overzoomed_tiles,
- bool demultiply, json_object *filter, bool preserve_input_order, std::unordered_map<std::string, attribute_op> const &attribute_accum, std::vector<std::string> const &unidecode_data) {
+ bool demultiply, json_object *filter, bool preserve_input_order, std::unordered_map<std::string, attribute_op> const &attribute_accum, std::vector<std::string> const &unidecode_data, double simplification,
+ double tiny_polygon_size) {
mvt_tile outtile;
std::shared_ptr<std::string> tile_stringpool = std::make_shared<std::string>();

@@ -916,6 +1182,7 @@ std::string overzoom(std::vector<source_tile> const &tiles, int nz, int nx, int
}

std::vector<tile_feature> pending_tile_features;
double accum_area = 0;

static const std::string retain_points_multiplier_first = "tippecanoe:retain_points_multiplier_first";
static const std::string retain_points_multiplier_sequence = "tippecanoe:retain_points_multiplier_sequence";
@@ -1014,6 +1281,23 @@ std::string overzoom(std::vector<source_tile> const &tiles, int nz, int nx, int
}
}

bool still_need_simplification_after_reduction = false;
if (t == VT_POLYGON && tiny_polygon_size > 0) {
bool simplified_away_by_reduction = false;

geom = reduce_tiny_poly(geom, nz, detail, &still_need_simplification_after_reduction, &simplified_away_by_reduction, &accum_area, tiny_polygon_size);
} else {
still_need_simplification_after_reduction = true;
}

if (simplification > 0 && still_need_simplification_after_reduction) {
if (t == VT_POLYGON) {
geom = simplify_lines_basic(geom, nz, detail, simplification, 4);
} else if (t == VT_LINE) {
geom = simplify_lines_basic(geom, nz, detail, simplification, 0);
}
}

// Scale to output tile extent

to_tile_scale(geom, nz, det);
@@ -1077,7 +1361,7 @@ std::string overzoom(std::vector<source_tile> const &tiles, int nz, int nx, int
std::string child = overzoom(sts,
nz + 1, nx * 2 + x, ny * 2 + y,
detail, buffer, keep, false, NULL,
- demultiply, filter, preserve_input_order, attribute_accum, unidecode_data);
+ demultiply, filter, preserve_input_order, attribute_accum, unidecode_data, simplification, tiny_polygon_size);
if (child.size() > 0) {
next_overzoomed_tiles->emplace_back(nx * 2 + x, ny * 2 + y);
}