Hi Mark et al!
I am working on building some simple automated tools for combining OSM data with animal tracking data for movement ecology problems (e.g., where and when did the chicken cross the road?). Package osmdata is perfect for this!
One constant issue we run into is that query calls become too large if the bounding box (or the volume of features requested) is too big. But these issues are all situation dependent. Is there any tool or method for estimating query size in advance, which could be used to guess whether a query will time out / throw an error (or to estimate how long it will take)?
Specifically, what I'm interested in knowing is whether there are ways to estimate a priori where queries need to be split, made smaller, etc., or if that is simply something you find out after trying and failing...
SIDENOTE: The query-splitting workflow from vignette 4 was not working as expected (for me), but I can explore what was going wrong when I get a bit more time and provide proper feedback there...
Cheers
Jed
I think it's not possible to know the runtime of a query beforehand. I would try to build the minimal query that you need (e.g., filter for roads if that's the only feature type you are interested in) and add a large timeout.
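A minimal sketch of that approach with osmdata; the bounding box and timeout values are just placeholders to adapt:

```r
library(osmdata)

## Placeholder bounding box (xmin, ymin, xmax, ymax) -- substitute your own.
bb <- c(144.9, -37.9, 145.1, -37.7)

## Restrict the query to the one key you actually need and allow a generous
## timeout, so the overpass server has more time before abandoning the request.
q <- opq(bbox = bb, timeout = 600)
q <- add_osm_feature(q, key = "highway")
roads <- osmdata_sf(q)
```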
Great to hear from you Jed! My workflow for jobs that are too big for the overpass API is to manually download pbf files from https://geofabrik.de (or with the osmextract package if you like, but use that only to download, not to read into R), use osmium-tool to filter by both geography and key-value pairs, output that to .osm/.xml format, and then read that in with osmdata.
One day I'll find time to wrap the osmium C++ code into an R package to enable that workflow within R, but until then you can also easily set it up via a bunch of system calls, like these examples (Linux only). Feel free to ask further questions.
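A rough sketch of that workflow driven from R, assuming osmium-tool is installed and on the system path; the Geofabrik URL, bounding box, and file names here are placeholders to adapt:

```r
library(osmdata)

## Download a regional extract from Geofabrik (placeholder region).
pbf_url <- "https://download.geofabrik.de/australia-oceania/new-zealand-latest.osm.pbf"
download.file(pbf_url, destfile = "extract.osm.pbf", mode = "wb")

## Clip the extract to the study area with osmium-tool.
bb <- "170.0,-46.0,171.0,-45.0"   # xmin,ymin,xmax,ymax
system(paste0("osmium extract --bbox ", bb,
              " extract.osm.pbf -o clipped.osm.pbf --overwrite"))

## Keep only ways with a highway key, writing plain .osm (XML) output.
system("osmium tags-filter clipped.osm.pbf w/highway -o roads.osm --overwrite")

## Read the filtered file with osmdata, bypassing the overpass API entirely.
roads <- osmdata_sf(doc = "roads.osm")
```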