Description
I have several directories, each corresponding to a different time period.
Each directory contains several netCDF files with different variables.
Here is a simplified example: path/period1/ contains varsABC_period1.nc and varsDEF_period1.nc, and path/period2/ contains varsABC_period2.nc and varsDEF_period2.nc. All variables share the dimension T (time), but the other dimensions can differ (e.g., A has dimensions [T,X1,Y1], B has dimensions [T,X1,Y2], ...).
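To make the layout concrete, here is a minimal in-memory sketch of two of these period files (hypothetical shapes; the variable names A/B and dimension names T, X1, Y1, Y2 follow the example above). Concatenating along T works because T is the only record dimension the variables share:

```python
import numpy as np
import xarray as xr

def make_abc(t):
    # One "varsABC" file: A has dims [T, X1, Y1], B has dims [T, X1, Y2]
    return xr.Dataset(
        {
            "A": (("T", "X1", "Y1"), np.zeros((len(t), 4, 5))),
            "B": (("T", "X1", "Y2"), np.zeros((len(t), 4, 6))),
        },
        coords={"T": t},
    )

# Stand-ins for varsABC_period1.nc and varsABC_period2.nc
ds1 = make_abc(np.arange(0, 3))
ds2 = make_abc(np.arange(3, 6))

# Concatenate the two periods along the shared T dimension
combined = xr.concat([ds1, ds2], dim="T")
```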
Before version v0.11.1 I was able to easily open a dataset as follows:
ds = xr.open_mfdataset('path/period*/*.nc', concat_dim='T')
However, open_mfdataset
now fails: it hangs without returning any output, so my guess is that it has become very, very slow.
I've noticed that I'm still able to create the dataset by concatenating the files with the same variables first and then merging the results:
dsABC = xr.open_mfdataset('path/period*/varsABC*.nc', concat_dim='T')
dsDEF = xr.open_mfdataset('path/period*/varsDEF*.nc', concat_dim='T')
ds = xr.merge([dsABC, dsDEF])
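The workaround generalizes to any number of variable groups: concatenate each group along T, then merge the per-group datasets. A minimal sketch using in-memory stand-ins for the four files (variable and dimension names are placeholders; with real files each inner list would come from an open_mfdataset call per glob pattern):

```python
import numpy as np
import xarray as xr

def concat_then_merge(groups):
    """Concatenate each group of datasets along T, then merge the groups.

    `groups` is a list of lists: one inner list per set of files that
    share the same variables (e.g. the varsABC files and the varsDEF files).
    """
    return xr.merge(xr.concat(dsets, dim="T") for dsets in groups)

# In-memory stand-ins for the four files in the example
t1, t2 = np.arange(3), np.arange(3, 6)
abc1 = xr.Dataset({"A": (("T", "X1"), np.zeros((3, 4)))}, coords={"T": t1})
abc2 = xr.Dataset({"A": (("T", "X1"), np.ones((3, 4)))}, coords={"T": t2})
def1 = xr.Dataset({"D": (("T", "X2"), np.zeros((3, 7)))}, coords={"T": t1})
def2 = xr.Dataset({"D": (("T", "X2"), np.ones((3, 7)))}, coords={"T": t2})

ds = concat_then_merge([[abc1, abc2], [def1, def2]])
```

This is fast because each concat only ever sees datasets with identical variables and dimensions, so no cross-variable alignment is needed.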
I can't figure out which change in v0.11.1 is causing this behavior. Any ideas?
Output of xr.show_versions()
xarray: 0.11.2
pandas: 0.23.4
numpy: 1.14.3
scipy: 1.1.0
netCDF4: 1.4.2
pydap: None
h5netcdf: None
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.3.4
PseudonetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: 1.2.1
cyordereddict: None
dask: 1.0.0
distributed: 1.25.2
matplotlib: 3.0.0
cartopy: None
seaborn: 0.9.0
setuptools: 40.4.3
pip: 18.1
conda: 4.5.12
pytest: 3.8.2
IPython: 7.0.1
sphinx: 1.8.1