I am using xarray to consolidate ~24 pre-existing, moderately large netCDF files into a single zarr store. Each file contains a DataArray with dimensions (channel, time), and no values are nan. Each file's timeseries picks up right where the previous one's left off, making this a perfect use case for out-of-memory file concatenation. Here is the code I started with:
```python
import xarray as xr
from tqdm import tqdm

for i, f in enumerate(tqdm(files)):
    da = xr.open_dataarray(f)  # Open the netCDF file
    da = da.chunk({'channel': da.channel.size, 'time': 'auto'})  # Chunk along the time dimension
    if i == 0:
        da.to_zarr(zarr_file, mode="w")
    else:
        da.to_zarr(zarr_file, append_dim='time')
    da.close()
```
This always writes the first file correctly, and every subsequent file appends without warning or error, but when I read the resulting zarr store back, ~25% of all timepoints (more likely, whole time chunks) derived from files with i > 0 are nan.
Admittedly, the above code seems dangerous, since there is no guarantee that da.chunk({'time': 'auto'}) will always return chunks of the same size, even though the files are nearly identical in size, and I don't know what the expected behavior is if the dask chunksizes don't match the chunksizes of the pre-existing zarr store. I checked the docs but didn't find the answer.
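For what it's worth, a quick way to see whether 'auto' chunking is actually consistent across files is to inspect the dask chunk tuples before writing anything. This is just a minimal sketch using the same variable names as above; the chunk sizes shown in the comment are illustrative, not from my data:

```python
# Inspect the dask chunking that 'auto' actually produces for each file
for f in files:
    da = xr.open_dataarray(f).chunk({'channel': -1, 'time': 'auto'})
    # .chunks is a tuple of per-dimension chunk-size tuples,
    # e.g. ((384,), (250000, 250000, 37000)) for (channel, time)
    print(f, da.chunks)
    da.close()
```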
Even if the chunksizes always do match, I am not sure what will happen when appending to an existing store. If the last chunk in the store before appending is not a full chunk, will it be "filled in" when new data are appended to the store? Presumably, but this seems like it could cause problems with parallel writing, since the source chunks from a dask array almost certainly won't line up with the new chunks in the zarr store, unless you've been careful to make it so.
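One thing you can at least check before each append is whether the store's current length along time is a multiple of its on-disk chunk size, i.e. whether the append boundary falls on a chunk boundary. A rough sketch using the zarr library directly; "my_variable" is a placeholder for whatever the DataArray is named in the store:

```python
import zarr

# Check whether the last time chunk in the existing store is partial,
# i.e. whether the next append will have to rewrite it.
group = zarr.open_group(zarr_file, mode="r")
arr = group["my_variable"]  # placeholder variable name
time_axis = 1               # dimensions are (channel, time)
print("store chunks:", arr.chunks)
print("store shape:", arr.shape)
if arr.shape[time_axis] % arr.chunks[time_axis] != 0:
    print("last time chunk is partial; the next append will rewrite it")
```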
In any case, the following change, which chunks only the first file and leaves the appended files as-is, seems to solve the issue, and the zarr store no longer contains nan.
```python
for i, f in enumerate(tqdm(files)):
    da = xr.open_dataarray(f)  # Open the netCDF file
    if i == 0:
        # Only rechunk the first file; its chunking defines the store's chunking
        da = da.chunk({'channel': da.channel.size, 'time': 'auto'})
        da.to_zarr(zarr_file, mode="w")
    else:
        da.to_zarr(zarr_file, append_dim='time')
    da.close()
```
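After the rewrite, a cheap sanity check is to re-open the store lazily and confirm nothing is nan. A minimal sketch, assuming the store holds a single data variable:

```python
# Lazily re-open the store and confirm it contains no nan values
ds = xr.open_zarr(zarr_file)
da_out = ds[list(ds.data_vars)[0]]  # assumes a single data variable in the store
assert not bool(da_out.isnull().any()), "store still contains nan values"
```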
I didn't file this as a bug, because I was doing something that was a bad idea, but it does seem like to_zarr should have stopped me from doing it in the first place.