Default chunking in GeoTIFF images #2093

Closed · mrocklin opened this issue Apr 30, 2018 · 10 comments

@mrocklin (Contributor)

Given a tiled GeoTIFF image I'm looking for the best practice in reading it as a chunked dataset. I did this in this notebook by first opening the file with rasterio, looking at the block sizes, and then using those to inform the argument to chunks= in xarray.open_rasterio. This works, but is somewhat cumbersome because I also had to dive into the rasterio API. Do we want to provide defaults here?
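
For concreteness, a minimal sketch of that workflow (the file path is a placeholder):

```python
import rasterio
import xarray as xr

# Read the on-disk block shape with rasterio, then reuse it as the dask
# chunking for xarray.open_rasterio.  "image.tif" is a placeholder path.
with rasterio.open("image.tif") as src:
    block_y, block_x = src.block_shapes[0]  # (rows, cols) of band 1's blocks

da = xr.open_rasterio("image.tif", chunks={"band": 1, "y": block_y, "x": block_x})
```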

In dask.array every time this has come up we've always shot it down, automatic chunking is error prone and hard to do well. However in these cases the object we're being given usually also conveys its chunking in a way that matches how dask.array thinks about it, so the extra cognitive load on the user has been somewhat low. Rasterio's model and API feel much more foreign to me though than a project like NetCDF or H5Py. I find myself wanting a chunks=True or chunks='100MB' option.

Thoughts on this? Is this in-scope? If so then what is the right API and what is the right policy for how to make xarray/dask.array chunks larger than GeoTIFF chunks?

@rabernat (Contributor)

rabernat commented Apr 30, 2018 via email

@mrocklin (Contributor, Author)

My guess is that geotiff chunks will be much smaller than is ideal for dask.array. We might want to expand those chunk sizes by some multiple.
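
As a rough illustration only (the factor of 8 and the path are assumptions, not a tested heuristic), expanding by a multiple could look like:

```python
import rasterio
import xarray as xr

# Illustrative only: aggregate an 8x8 grid of on-disk GeoTIFF blocks into
# each dask chunk.
multiple = 8
with rasterio.open("image.tif") as src:
    block_y, block_x = src.block_shapes[0]

da = xr.open_rasterio(
    "image.tif",
    chunks={"band": 1, "y": block_y * multiple, "x": block_x * multiple},
)
```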

@jhamman (Member)

jhamman commented Apr 30, 2018

#1440 is related but more focused on netcdf.

@ebo

ebo commented Apr 30, 2018

Most standard internal chunking (or what I believe the GIS community calls 'tiling') is 256x256 (see http://www.gdal.org/frmt_gtiff.html, the TILED=YES, BLOCKXSIZE=n, and BLOCKYSIZE=n creation options). This is used when viewing images within a given region of interest or window. You can really tell the difference in speed between tiled and striped images (whose blocks are only a row or so high).
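
A small sketch of how that difference shows up through rasterio (the path is a placeholder and the check is only a rough test):

```python
import rasterio

# A tiled GeoTIFF reports compact blocks (e.g. 256x256), while a striped one
# reports blocks that span the full image width and are only a row or so high.
with rasterio.open("image.tif") as src:
    block_y, block_x = src.block_shapes[0]
    if block_x == src.width:  # strips span the full image width
        print(f"striped: blocks are {block_y} x {block_x}")
    else:
        print(f"tiled: blocks are {block_y} x {block_x}")
```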

@mrocklin, I agree that we might want to aggregate some number of these blocks, but we would need some automation up front and would have to sort out how to determine the expansion. Adding to the #1440 discussion mentioned above, there will likely be an advantage to increasing the block sizes in particular directions.

@Zac-HD (Contributor)

Zac-HD commented May 7, 2018

With the benefit of almost a year's worth of procrastination, I think the best approach is to take the heuristics from #1440, but only support chunks=True: if the default heuristic isn't good enough, the user can specify exact chunks.

The underlying logic for this issue would be identical to that of #1440, so supporting both is "just" a matter of plumbing it in correctly.
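
As a rough illustration only (not the actual heuristic proposed in #1440), a chunks=True policy could grow the on-disk block shape by an integer factor until each chunk reaches a target size:

```python
import math

def default_chunks(block_shape, itemsize, target_bytes=100e6):
    """Hypothetical chunks=True heuristic: scale a GeoTIFF block shape by an
    integer factor so each dask chunk is roughly target_bytes."""
    block_y, block_x = block_shape
    factor = max(1, int(math.sqrt(target_bytes / (block_y * block_x * itemsize))))
    return {"band": 1, "y": block_y * factor, "x": block_x * factor}

# e.g. 256x256 float64 blocks -> factor 13 -> 3328x3328 chunks (~89 MB each)
print(default_chunks((256, 256), 8))
```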

@ebo

ebo commented May 7, 2018 via email

@ebo

ebo commented Jun 18, 2018

One of the issues related to this has been closed. Has default GeoTIFF chunking been implemented?

@fmaussion (Member)

> Has default GeoTIFF chunking been implemented?

No, unfortunately.

@ebo

ebo commented Jun 18, 2018 via email

@stale

stale bot commented May 19, 2020

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity.

If this issue remains relevant, please comment here or remove the stale label; otherwise it will be marked as closed automatically.

stale bot added the stale label on May 19, 2020
stale bot closed this as completed on Jun 18, 2020