Opening dataset without loading any indexes? #6633
Early versions of Xarray used to have lazy loading of data for indexes, but we removed this for the sake of simplicity. In principle we could restore lazy indexes, but another option (post explicit-index refactor) might be to support opening a dataset without creating indexes for 1D coordinates along dimensions.

Another way to solve this sort of challenge might be to load index data in parallel when using Dask. Right now I believe the data corresponding to indexes is always loaded eagerly, without using Dask.

All that said: do you have a specific example where this has been problematic? In my experience it has been pretty reasonable to use xarray.Dataset objects for schema-like templates, even with index data needing to be loaded eagerly. Possibly another Zarr chunking scheme for your index data could be more efficient?
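For illustration, here is a minimal sketch of the schema-like template pattern mentioned above (the variable names, dimensions, and coordinate values are invented for the example):

```python
import numpy as np
import xarray as xr

# A Dataset used purely as a schema: dims, dtypes, coords, and attrs
# describe a target store without holding any real data values.
template = xr.Dataset(
    {
        "sst": (
            ("time", "lat", "lon"),
            np.zeros((0, 180, 360), dtype="float32"),  # zero-length time axis
        )
    },
    coords={
        "lat": np.linspace(-89.5, 89.5, 180),
        "lon": np.linspace(-179.5, 179.5, 360),
    },
)
```

Even in a template like this, the `lat` and `lon` coordinate arrays are materialized in memory, which is exactly the eager index loading under discussion.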
It might indeed be worth considering this case too in #6392. Maybe
Thanks for replying, both.
I'll have to defer to the others I tagged for the gory details. Perhaps one of them can cross-link to the specific issue they were having?
I would probably do
+1, this syntax makes sense to me!
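For illustration only, here is a hypothetical sketch of what such an option could look like (the `indexes={}` keyword is invented for this sketch; see #6392 for the actual design discussion):

```python
import xarray as xr

# Hypothetical keyword, not a real open_dataset argument: open the
# structure and lazy data without building indexes for dimension coords.
ds = xr.open_dataset("s3://mur-sst/zarr/", engine="zarr", indexes={})

ds.isel(time=0)              # positional indexing would still work
# ds.sel(time="2019-01-01")  # label-based selection would fail: no time index
```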
Here is an example that really highlights the performance cost of always loading dimension coordinates:

```python
import xarray as xr
import zarr

store = zarr.storage.FSStore("s3://mur-sst/zarr/", anon=True)

%time list(zarr.open_consolidated(store))         # -> Wall time: 86.4 ms
%time ds = xr.open_dataset(store, engine='zarr')  # -> Wall time: 17.1 s

zgroup = zarr.open_consolidated(store)
%time _ = zgroup['time'][:]                       # -> Wall time: 14.7 s
```

Obviously this example is pretty extreme. There are things that could be done to optimize it, etc. But it really highlights the cost of eagerly loading dimension coordinates. If I don't care about label-based indexing for this dataset, I would rather have my 17s back!

👍 to the syntax suggested above.
Looking at this mur-sst dataset in particular, it stores time in chunks of size 5. That means fetching the 6443 time values requires 1288 separate HTTP requests -- no wonder it's so slow! If the time axis were instead stored in a single chunk of 51 KB, Xarray would only need three small HTTP requests to load the lat, lon, and time indexes, which would probably complete in a fraction of a second.

That said, I agree that this would be nice to have in general.
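For concreteness, a sketch of how the time coordinate could be rewritten into a single chunk (untested against this store; the target path is illustrative, and writing requires a store you control):

```python
import zarr

# Read the badly chunked coordinate once, then write it as one chunk
# so future readers can fetch it in a single request.
store = zarr.storage.FSStore("s3://mur-sst/zarr/", anon=True)
src = zarr.open_consolidated(store)

time = src["time"][:]  # the slow part: ~1288 requests with 5-value chunks

dst = zarr.open_group("mur-sst-rechunked.zarr", mode="w")
dst.create_dataset("time", data=time, chunks=(time.shape[0],))  # one ~51 KB chunk
dst["time"].attrs.update(src["time"].attrs)  # preserve units/calendar metadata
```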
Yes, it is definitely a pathological example. 💣 But the fact remains that there are many cases where we just want to discover a dataset's contents as quickly as possible and want to avoid the cost of loading coordinates and creating indexes.
This would also fix #2233
Here's one from @lsetiawan that can't be opened because it has a 75 GB coordinate:

```python
import zarr

zarr.open_group(
    "s3://ooi-data/RS03ECAL-MJ03E-06-BOTPTA302-streamed-botpt_nano_sample",
    mode="r",
    storage_options=dict(anon=True),
)
```
I stumbled into this issue while experimenting with `page_buf_size` on the h5netcdf backend and looking for ways to get Xarray closer to the speed of h5py for loading variables when coordinates are too much baggage. As an alternative to #8051, I would like to submit for consideration an `open_variable` method as a fast path from a store to an `xarray.Variable` (or a Mapping, if given either a list of variables in the store or None for all variables).
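A hypothetical sketch of what that might look like (`open_variable` does not exist in xarray; the file name, variable names, and signature here are invented for illustration):

```python
import xarray as xr

# Proposed fast path (not a real xarray function): go straight from a
# store to Variable objects, skipping coordinate loading and index creation.
var = xr.open_variable("data.nc", "analysed_sst", engine="h5netcdf")
# -> xarray.Variable

some = xr.open_variable("data.nc", ["analysed_sst", "mask"], engine="h5netcdf")
# -> Mapping[str, xarray.Variable]

everything = xr.open_variable("data.nc", None, engine="h5netcdf")
# -> Mapping with every variable in the store
```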
**Is your feature request related to a problem?**

Within pangeo-forge's internals we would like to call `open_dataset`, then `to_dict()`, and end up with a schema-like representation of the contents of the dataset. This works, but it also has the side effect of loading all indexes into memory, even if we are loading the data values "lazily".

**Describe the solution you'd like**

@benbovy do you think it would be possible to (perhaps optionally) also avoid loading indexes upon opening a dataset, so that we actually don't load anything? The end result would act a bit like `ncdump` does.

**Describe alternatives you've considered**

Otherwise we might have to try using xarray-schema or something, but the suggestion here would be much neater and more flexible.
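For context, a minimal sketch of the workflow described above (the store path is illustrative):

```python
import xarray as xr

# The pattern pangeo-forge uses today: open lazily, then serialize the
# structure. to_dict(data=False) keeps dims/coords/attrs and drops data
# values -- but open_dataset has already loaded every index eagerly.
ds = xr.open_dataset("s3://some-bucket/some-dataset.zarr", engine="zarr")
schema = ds.to_dict(data=False)
```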
xref: pangeo-forge/pangeo-forge-recipes#256
cc @rabernat @jhamman @cisaacstern