Description
Code Sample, a copy-pastable example if possible
A "Minimal, Complete and Verifiable Example" will make it much easier for maintainers to help you:
http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
import xarray as xr
from netCDF4 import Dataset

def write_netcdf(filename, zlib, least_significant_digit, data, dtype='f4',
                 shuffle=False, contiguous=False, chunksizes=None,
                 complevel=6, fletcher32=False):
    # create a one-element netCDF file with the requested compression/filter options
    file = Dataset(filename, 'w')
    file.createDimension('n', 1)
    foo = file.createVariable('data', dtype, ('n',), zlib=zlib,
                              least_significant_digit=least_significant_digit,
                              shuffle=shuffle, contiguous=contiguous,
                              complevel=complevel, fletcher32=fletcher32,
                              chunksizes=chunksizes)
    foo[:] = data
    file.close()

# write a compressed, chunked single-element file and reopen it with xarray
write_netcdf('mydatafile.nc', True, None, 0.0, shuffle=True, chunksizes=(1,))
data = xr.open_dataset('mydatafile.nc')
arr = data['data']

# selecting a single element gives a scalar DataArray; writing it out crashes
arr[0].to_netcdf('mytestfile.nc', mode='w', engine='h5netcdf')
Problem description
The above example crashes with a TypeError since xarray 0.10.4 (it works with earlier versions, hence reporting the error here rather than in e.g. h5netcdf):
TypeError: Scalar datasets don't support chunk/filter options
The problem is that it is no longer possible to squeeze (or select a single element from) an array that comes from a compressed or filtered netCDF file and write the result back out.
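As a possible workaround, here is a minimal sketch that assumes the failure is caused by the chunk/filter encoding the scalar slice inherits from the compressed source variable; it simply drops those encoding keys before writing. The list of keys is a plausible guess based on the standard netCDF encoding fields and has not been verified for this report:

import xarray as xr

data = xr.open_dataset('mydatafile.nc')
scalar = data['data'][0]

# drop chunk/filter-related encoding inherited from the compressed source
# variable (assumed cause of the TypeError) before writing the scalar out
for key in ('chunksizes', 'zlib', 'complevel', 'shuffle', 'fletcher32', 'contiguous'):
    scalar.encoding.pop(key, None)

scalar.to_netcdf('mytestfile.nc', mode='w', engine='h5netcdf')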
Expected Output
The expected output is that the creation of the trimmed netCDF file succeeds.
Output of xr.show_versions()
INSTALLED VERSIONS
commit: None
python: 3.6.6.final.0
python-bits: 64
OS: Linux
OS-release: 3.10.0-957.el7.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_GB.UTF-8
LOCALE: en_GB.UTF-8
xarray: 0.11.0
pandas: 0.23.4
numpy: 1.15.4
scipy: 1.1.0
netCDF4: 1.3.1
h5netcdf: 0.6.2
h5py: 2.8.0
Nio: None
zarr: None
cftime: None
PseudonetCDF: None
rasterio: 1.0.2
iris: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.20.2
distributed: None
matplotlib: 3.0.0
cartopy: 0.16.0
seaborn: None
setuptools: 40.5.0
pip: 9.0.3
conda: None
pytest: None
IPython: 6.2.1
sphinx: 1.8.1