Help! Not able to render detailed tiles #72

Closed
kmprakashbabu opened this issue Oct 4, 2019 · 15 comments

@kmprakashbabu

Hi,
First, sorry for the long write-up.

Thanks for this excellent work. I followed the steps and used planet-190930.osm.pbf for the import.

Steps that I followed:

1. docker volume create openstreetmap-data
2. docker run -v \planet-190930.osm.pb:/data.osm.pbf -v openstreetmap-data:/var/lib/postgresql/10/main overv/openstreetmap-tile-server import
3. docker run -v \planet-190930.osm.pbf:/data.osm.pbf -v openstreetmap-data:/var/lib/postgresql/10/main overv/openstreetmap-tile-server import

I am able to start the instance and access the tiles, but I see only country outlines. If I zoom in, no detailed tiles are rendered, just blank tiles. Could you please tell me whether I missed something or whether I have to perform any other steps?

My environment:
Windows 10 running Docker Desktop; I ran the Docker CLI and executed the steps above.
i7, 4 cores, 16 GB RAM and a 500 GB hard disk (only 50 GB free space)

Here is the tail of the import output (step 2). I see "155 Killed".
Reading in file: /data.osm.pbf
Using PBF parser.
Processing: Node(52910k 348.1k/s) Way(0k 0.00k/s) Relation(0 0.00/s)/run.sh: line 67: 155 Killed sudo -u renderer osm2pgsql -d gis --create --slim -G --hstore --tag-transform-script /home/renderer/src/openstreetmap-carto/openstreetmap-carto.lua --number-processes ${THREADS:-4} ${OSM2PGSQL_EXTRA_ARGS} -S /home/renderer/src/openstreetmap-carto/openstreetmap-carto.style /data.osm.pbf

+ sudo -u postgres psql -d gis -f indexes.sql
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
+ service postgresql stop
 * Stopping PostgreSQL 10 database server
   ...done.
+ exit 0

Thanks
Prakash

@Istador
Contributor

Istador commented Oct 4, 2019

2. docker run -v \planet-190930.osm.pb:/data.osm.pbf -v openstreetmap-data:/var/lib/postgresql/10/main overv/openstreetmap-tile-server import

It should be planet-190930.osm.pbf and not planet-190930.osm.pb

Windows 10 running Docker Desktop; I ran the Docker CLI and executed the steps above.
i7, 4 cores, 16 GB RAM and a 500 GB hard disk (only 50 GB free space)

50 GB will not be enough space for the full planet import.
The database for Europe alone would be more than 400 GB.

Importing it on a hard disk drive (HDD) instead of a solid-state drive (SSD) will take weeks instead of hours.
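
If you only need a specific region, it is usually easier to import a Geofabrik extract instead of the full planet. A rough, untested sketch, assuming the image's DOWNLOAD_PBF / DOWNLOAD_POLY options; the Germany URLs are only an example:

docker run \
    -e DOWNLOAD_PBF=http://download.geofabrik.de/europe/germany-latest.osm.pbf \
    -e DOWNLOAD_POLY=http://download.geofabrik.de/europe/germany.poly \
    -v openstreetmap-data:/var/lib/postgresql/10/main \
    overv/openstreetmap-tile-server \
    import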

Processing: Node(52910k 348.1k/s) Way(0k 0.00k/s) Relation(0 0.00/s)/run.sh: line 67: 155 Killed sudo -u renderer osm2pgsql -d gis --create --slim -G --hstore --tag-transform-script /home/renderer/src/openstreetmap-carto/openstreetmap-carto.lua --number-processes ${THREADS:-4} ${OSM2PGSQL_EXTRA_ARGS} -S /home/renderer/src/openstreetmap-carto/openstreetmap-carto.style /data.osm.pbf

/run.sh: line 67: 155 Killed

This means the import process was killed/aborted, probably due to a memory problem (see issue #31).
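
If memory is the limit, one thing worth trying (a sketch, not a guaranteed fix) is capping the osm2pgsql node cache via the image's OSM2PGSQL_EXTRA_ARGS variable, e.g. -C 512 to limit the cache to 512 MB; adjust the host path to wherever your .pbf actually lives:

docker run \
    -v /path/to/planet-190930.osm.pbf:/data.osm.pbf \
    -v openstreetmap-data:/var/lib/postgresql/10/main \
    -e "OSM2PGSQL_EXTRA_ARGS=-C 512" \
    overv/openstreetmap-tile-server \
    import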

@kmprakashbabu
Author

Thank you very much, Robin, for taking the time to reply. I will try with an SSD.

@icemagno

Same issue here... only a basic contour map with no details.

This is the last log from the IMPORT phase:

Reading in file: /data.osm.pbf
Using PBF parser.
Processing: Node(122371k 518.5k/s) Way(10735k 31.85k/s) Relation(194500 2069.15/s)  parse time: 667s
Node stats: total(122371182), max(6887086633) in 236s
Way stats: total(10735125), max(735328184) in 337s
Relation stats: total(196310), max(10164904) in 94s
Sorting data and creating indexes for planet_osm_point
Sorting data and creating indexes for planet_osm_line
Sorting data and creating indexes for planet_osm_polygon
Sorting data and creating indexes for planet_osm_roads
Copying planet_osm_point to cluster by geometry finished
Creating geometry index on planet_osm_point
Copying planet_osm_roads to cluster by geometry finished
Creating geometry index on planet_osm_roads
Creating osm_id index on planet_osm_roads
Creating indexes on planet_osm_roads finished
All indexes on planet_osm_roads created in 43s
Completed planet_osm_roads
Stopping table: planet_osm_nodes
Stopped table: planet_osm_nodes in 0s
Stopping table: planet_osm_ways
Building index on table: planet_osm_ways
Copying planet_osm_line to cluster by geometry finished
Creating geometry index on planet_osm_line
Creating osm_id index on planet_osm_point
Creating indexes on planet_osm_point finished
All indexes on planet_osm_point created in 116s
Completed planet_osm_point
Stopping table: planet_osm_rels
Building index on table: planet_osm_rels
Stopped table: planet_osm_rels in 8s
Copying planet_osm_polygon to cluster by geometry finished
Creating geometry index on planet_osm_polygon
Creating osm_id index on planet_osm_line
Creating indexes on planet_osm_line finished
All indexes on planet_osm_line created in 232s
Completed planet_osm_line
Creating osm_id index on planet_osm_polygon
Creating indexes on planet_osm_polygon finished
All indexes on planet_osm_polygon created in 295s
Completed planet_osm_polygon
Stopped table: planet_osm_ways in 569s

Osm2pgsql took 1281s overall
node cache: stored: 122371182(100.00%), storage efficiency: 55.01% (dense blocks: 5997, sparse nodes: 86664572), hit rate: 100.00%
+ sudo -u postgres psql -d gis -f indexes.sql
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
+ touch /var/lib/mod_tile/planet-import-complete
+ service postgresql stop
 * Stopping PostgreSQL 10 database server
   ...done.
+ exit 0

and the log from the RUN phase shows nothing wrong... only things like
renderd[145]: Rendering projected coordinates 15 12448 18528 -> -4813698.293290|-2631879.757917 -4803914.353669|-2622095.818296 to a 8 x 8 tile

or

renderd[145]: DEBUG: Got command RenderPrio fd(7) xml(ajt), z(15), x(12449), y(18528), mime(image/png), options()

@Fedorm

Fedorm commented Nov 23, 2019

I have the same issue.

CREATE INDEX
+ touch /var/lib/mod_tile/planet-import-complete
+ service postgresql stop
 * Stopping PostgreSQL 10 database server
   ...done.
+ exit 0

@hamzarhaiem

Same issue.

+ '[' 1 -ne 1 ']'
+ '[' import = import ']'
+ createPostgresConfig
+ cp /etc/postgresql/12/main/postgresql.custom.conf.tmpl /etc/postgresql/12/main/postgresql.custom.conf
+ sudo -u postgres echo 'autovacuum = on'
+ cat /etc/postgresql/12/main/postgresql.custom.conf
# Suggested minimal settings from
# https://ircama.github.io/osm-carto-tutorials/tile-server-ubuntu/

shared_buffers = 128MB
min_wal_size = 1GB
max_wal_size = 2GB
maintenance_work_mem = 256MB

# Suggested settings from
# https://github.com/openstreetmap/chef/blob/master/roles/tile.rb#L38-L45

max_connections = 250
temp_buffers = 32MB
work_mem = 128MB
wal_buffers = 1024kB
wal_writer_delay = 500ms
commit_delay = 10000
# checkpoint_segments = 60 # unrecognized in psql 10.7.1
max_wal_size = 2880MB
random_page_cost = 1.1
track_activity_query_size = 16384
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.02

listen_addresses = '*'
autovacuum = on
+ service postgresql start
 * Starting PostgreSQL 12 database server
   ...done.
+ sudo -u postgres createuser renderer
+ sudo -u postgres createdb -E UTF8 -O renderer gis
+ sudo -u postgres psql -d gis -c 'CREATE EXTENSION postgis;'
CREATE EXTENSION
+ sudo -u postgres psql -d gis -c 'CREATE EXTENSION hstore;'
CREATE EXTENSION
+ sudo -u postgres psql -d gis -c 'ALTER TABLE geometry_columns OWNER TO renderer;'
ALTER TABLE
+ sudo -u postgres psql -d gis -c 'ALTER TABLE spatial_ref_sys OWNER TO renderer;'
ALTER TABLE
+ setPostgresPassword
+ sudo -u postgres psql -c 'ALTER USER renderer PASSWORD '\''renderer'\'''
ALTER ROLE
+ '[' '!' -f /data.osm.pbf ']'
+ '[' disabled = enabled ']'
+ '[' -f /data.poly ']'
+ sudo -u renderer osm2pgsql -d gis --create --slim -G --hstore --tag-transform-script /home/renderer/src/openstreetmap-carto/openstreetmap-carto.lua --number-processes 4 -S /home/renderer/src/openstreetmap-carto/openstreetmap-carto.style /data.osm.pbf
osm2pgsql version 1.0.0 (64 bit id space)

Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=800MB, maxblocks=12800*65536, allocation method=11
Mid: pgsql, cache=800
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Using lua based tag processing pipeline with script /home/renderer/src/openstreetmap-carto/openstreetmap-carto.lua
Using projection SRS 3857 (Spherical Mercator)
Setting up table: planet_osm_point
Setting up table: planet_osm_line
Setting up table: planet_osm_polygon
Setting up table: planet_osm_roads

Reading in file: /data.osm.pbf
Using PBF parser.
Processing: Node(2231k 446.3k/s) Way(297k 33.04k/s) Relation(3380 1126.67/s)  parse time: 17s
Node stats: total(2231640), max(7015473106) in 5s
Way stats: total(297376), max(750282124) in 9s
Relation stats: total(4131), max(10344355) in 3s
Sorting data and creating indexes for planet_osm_point
Sorting data and creating indexes for planet_osm_roads
Sorting data and creating indexes for planet_osm_line
Sorting data and creating indexes for planet_osm_polygon
Copying planet_osm_point to cluster by geometry finished
Creating geometry index on planet_osm_point
Copying planet_osm_roads to cluster by geometry finished
Creating geometry index on planet_osm_roads
Creating osm_id index on planet_osm_roads
Creating indexes on planet_osm_roads finished
All indexes on planet_osm_roads created in 2s
Completed planet_osm_roads
Stopping table: planet_osm_nodes
Stopped table: planet_osm_nodes in 0s
Stopping table: planet_osm_ways
Building index on table: planet_osm_ways
Creating osm_id index on planet_osm_point
Creating indexes on planet_osm_point finished
All indexes on planet_osm_point created in 3s
Completed planet_osm_point
Stopping table: planet_osm_rels
Building index on table: planet_osm_rels
Stopped table: planet_osm_rels in 0s
Copying planet_osm_line to cluster by geometry finished
Creating geometry index on planet_osm_line
Creating osm_id index on planet_osm_line
Creating indexes on planet_osm_line finished
Copying planet_osm_polygon to cluster by geometry finished
Creating geometry index on planet_osm_polygon
All indexes on planet_osm_line created in 5s
Completed planet_osm_line
Creating osm_id index on planet_osm_polygon
Creating indexes on planet_osm_polygon finished
All indexes on planet_osm_polygon created in 6s
Completed planet_osm_polygon
Stopped table: planet_osm_ways in 6s

Osm2pgsql took 26s overall
node cache: stored: 2231640(100.00%), storage efficiency: 49.92% (dense blocks: 2, sparse nodes: 2227050), hit rate: 100.00%
+ sudo -u postgres psql -d gis -f indexes.sql
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
+ touch /var/lib/mod_tile/planet-import-complete
+ service postgresql stop
 * Stopping PostgreSQL 12 database server
   ...done.
+ exit 0

@fogers777

Is there any update on this issue? I'm facing the same issue.

@Overv
Owner

Overv commented Jan 7, 2020

What may be happening is that the tiles just take a very long time to render rather than never rendering at all, especially if you import the whole planet.
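
If slow on-demand rendering is the cause, one option is to pre-render the low zoom levels in advance. A rough sketch, assuming mod_tile's render_list tool is available inside the running container and that the style is named ajt, as in the renderd log lines quoted above (the container name is a placeholder):

docker exec -it <container-name> render_list -a -m ajt -z 0 -Z 10 -n 4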

@gabrielibis

I have the same issue. Has anyone solved the problem?

@ruhepuls
Contributor

ruhepuls commented Jan 30, 2020

Check out this issue related to openstreetmap-carto v4.23:

gravitystorm/openstreetmap-carto#3942
gravitystorm/openstreetmap-carto@78e1aef

After adapting the query in mapnik.xml accordingly, I could finally render detailed tiles very quickly as well.

@standinga

@ruhepuls can you share what exactly should be set in mapnik.xml? I can see only gray tiles. Thanks!

@ruhepuls
Contributor

ruhepuls commented Jan 31, 2020

@standinga

For a quick test, search in mapnik.xml for

WHERE way && !bbox! AND ("addr:housenumber" IS NOT NULL) OR ("addr:housename" IS NOT NULL) OR ((tags->'addr:unit') IS NOT NULL)

and replace it with

WHERE way && !bbox! AND (("addr:housenumber" IS NOT NULL) OR ("addr:housename" IS NOT NULL) OR ((tags->'addr:unit') IS NOT NULL))

Afterwards, restart the renderd daemon or the container.
I had similar problems at low zoom levels (Europe import), where tile generation was very slow or just stopped.
The adapted query for address points now only fetches features within the given map extent.
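
One way to apply this to an already running container without rebuilding the image (a sketch; the container name is a placeholder and the mapnik.xml location is assumed from the openstreetmap-carto paths visible in the logs above):

docker cp <container-name>:/home/renderer/src/openstreetmap-carto/mapnik.xml .
# edit mapnik.xml locally and add the parentheses shown above, then copy it back:
docker cp mapnik.xml <container-name>:/home/renderer/src/openstreetmap-carto/mapnik.xml
docker restart <container-name>

Note that a change made this way lives only in that container and is lost if the container is removed and recreated.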

@standinga

@ruhepuls Thanks a lot! I will look into it!

@tajchert

tajchert commented Mar 6, 2021

I have the same/similar issue:

/run.sh: line 87:   301 Killed                  sudo -u renderer osm2pgsql -d gis --create --slim -G --hstore --tag-transform-script /home/renderer/src/openstreetmap-carto/openstreetmap-carto.lua --number-processes ${THREADS:-4} -S /home/renderer/src/openstreetmap-carto/openstreetmap-carto.style /data.osm.pbf ${OSM2PGSQL_EXTRA_ARGS}
+ sudo -u postgres psql -d gis -f indexes.sql
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
+ touch /var/lib/mod_tile/planet-import-complete
+ service postgresql stop
 * Stopping PostgreSQL 12 database server
   ...done.
+ exit 0

Run:

docker run \
    -e DOWNLOAD_PBF=http://download.geofabrik.de/europe/poland-latest.osm.pbf \
    -e DOWNLOAD_POLY=http://download.geofabrik.de/europe/poland.poly \
    -v /home/ec2-user/osmtileserver/postgresql.custom.conf.tmpl:/etc/postgresql/12/main/postgresql.custom.conf.tmpl \
    -v openstreetmap-data:/var/lib/postgresql/12/main \
    -e ALLOW_CORS=enabled \
    -e "OSM2PGSQL_EXTRA_ARGS=-C 4096" \
    overv/openstreetmap-tile-server \
    import

And my postgresql.custom.conf.tmpl:

shared_buffers = 2GB
min_wal_size = 1GB
max_wal_size = 2GB
maintenance_work_mem = 512MB

max_connections = 300
temp_buffers = 32MB
work_mem = 256MB
wal_buffers = 16MB
wal_writer_delay = 500ms
commit_delay = 10000
# checkpoint_segments = 60 # unrecognized in psql 10.7.1
max_wal_size = 2880MB
random_page_cost = 1.1
track_activity_query_size = 16384
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.02

listen_addresses = '*'

@Istador
Contributor

Istador commented Mar 7, 2021

@tajchert Looks like your import process was killed by the OOM killer.

What EC2 instance type are you running on (how much memory does it have?) and is there anything else running on that instance?


I'm no expert in tuning PostgreSQL or the OSM import process, but here is what I believe to be right (though I might be wrong).

You set OSM2PGSQL_EXTRA_ARGS=-C 4096 so the import process can consume up to 4 GB of memory for caching.

With max_connections = 300 and work_mem = 256MB the postgres database (not the import process) is allowed to consume up to 300 x 256 MB = 77 GB of memory.

You don't increase THREADS, so the default is 4. It should therefore be using about 4 threads x 7 tables = 28 connections, each using up to 256 MB. This results in around 28 connections x 256 MB ≈ 7.2 GB of memory that the database maximally consumes for the import (out of the 77 GB that is allowed).

I assume the database workers are separated over multiple threads/processes of up to 256 MB each, meaning the import itself, with its 4 GB cache, is the single process consuming the most memory and therefore gets killed.

So, in sum, your settings would require a system with at least 12 GB of memory for the import (running it with automatic updates would require more, because the number of possible threads/connections is doubled by the update process).
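
In short (approximate): 4 GB osm2pgsql cache + ~7.2 GB database workers ≈ 11 GB, which is where the rough 12 GB minimum comes from, not yet counting shared_buffers or the operating system itself.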


Because you increased shared_buffers = 2GB, you'll likely want to reduce that setting or increase Docker's default shm_size of 64 MB by passing an appropriate --shm-size argument to your docker run command.
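
For example, adding --shm-size to the import command you posted (a sketch; 2g is only an illustrative value and should match what PostgreSQL actually needs):

docker run \
    --shm-size=2g \
    -e DOWNLOAD_PBF=http://download.geofabrik.de/europe/poland-latest.osm.pbf \
    -e DOWNLOAD_POLY=http://download.geofabrik.de/europe/poland.poly \
    -v openstreetmap-data:/var/lib/postgresql/12/main \
    overv/openstreetmap-tile-server \
    import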


In my company we decided against running our OSM server in AWS because it would cost too much, and settled on classical server hosting for it (unlike most of our other production systems, which run in AWS).

@tajchert

tajchert commented Mar 7, 2021

Thanks.
Yes, I will keep using the default config provided, since it works, and gradually change values (I used PGTune for the above values, which are way off, as I'm using a t3a.large AWS instance with the Poland map). Also, in my case AWS is a temporary solution due to costs (to figure out the needed hardware specs without upfront costs).
Thanks for explaining many of the values!
