How should xarray serialize bytes/unicode strings across Python/netCDF versions? #2059
Comments
Thanks a lot Stephan for writing that up!
This would be my personal opinion here. If you feel like this is something you'd like to provide before the last py2-compatible xarray comes out, then I'm fine with it, but it shouldn't have top priority...
Currently, the dtype does not seem to roundtrip faithfully. This can be reproduced by inserting the following lines in the script above (and adjusting the print statement accordingly):

    with xr.open_dataset(filename) as ds:
        read_dtype = ds['data'].dtype

which shows a dtype different from the one that was written.
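A self-contained version of that reproduction might look like this (the filename and sample data are my assumptions, since the original script is not shown here):

```python
import numpy as np
import xarray as xr

filename = "dtype_roundtrip.nc"  # assumed name
write_dtype = np.dtype("S3")     # any fixed-width string dtype illustrates the report

xr.Dataset({"data": ("x", np.array([b"foo"], dtype=write_dtype))}).to_netcdf(filename)

with xr.open_dataset(filename) as ds:
    read_dtype = ds["data"].dtype

# On the reporter's setup the two differed, i.e. the dtype did not roundtrip.
print(write_dtype, "->", read_dtype)
```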
Also: is it possible to preserve dtype when persisting xarray Datasets/DataArrays to disk?
Unfortunately, there is a frustrating disconnect between string data types in NumPy and netCDF. This could be done in principle, but it would require adding an xarray-specific convention on top of netCDF. I'm not sure it would be worth it -- we already end up converting np.unicode_ to object dtype in many operations, because we need a string dtype that can support missing values. For reading data from disk, we use object dtype because we don't know the length of the longest string until we actually read the data, so a fixed-width dtype would be incompatible with lazy loading.
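As a small illustration of the missing-value point (the example values are mine, not from the thread):

```python
import numpy as np

# Fixed-width unicode has no way to represent a missing value:
# NaN is silently stringified.
fixed = np.array(["foo", np.nan])
print(fixed.dtype)  # <U32 -- NaN became the literal string 'nan'

# Object dtype keeps the real NaN marker, at the cost of fixed-width storage.
flexible = np.array(["foo", np.nan], dtype=object)
print(flexible.dtype)  # object
```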
This may be relevant here, maybe not, but it appears the HDF5 backend is also at odds with all of the above serialization. Our internal project's dependencies changed, and that moved us to a newer h5py. Essentially, netCDF4 files that were round-tripped to a BytesIO (via an HDF5 backend) had unicode strings converted to bytes. I'm not sure whether it was the encoding or the decoding part (likely decoding, judging by the docs: https://docs.h5py.org/en/stable/strings.html). This might require even more special-casing to achieve consistent behavior for xarray users who don't really want to go into backend details (like me 😋).
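The in-memory roundtrip described above can be sketched as follows (a minimal reconstruction; the engine choice and file-like support are assumptions about the reporter's setup):

```python
import io

import numpy as np
import xarray as xr

ds = xr.Dataset({"data": ("x", np.array(["foo", "bar"], dtype=object))})

# Write netCDF4/HDF5 bytes to an in-memory buffer via the h5netcdf backend
# (assumes an h5netcdf/h5py combination that accepts file-like objects).
buf = io.BytesIO()
ds.to_netcdf(buf, engine="h5netcdf")

buf.seek(0)
with xr.open_dataset(buf, engine="h5netcdf") as roundtripped:
    # With h5py 3, the strings may come back as bytes rather than str.
    print(type(roundtripped["data"].values[0]))
```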
@NowanIlfideme the h5py 3 changes with regard to strings are also tracked in #4570.
netCDF string types
We have several options for storing strings in netCDF files:
- NC_CHAR: netCDF's legacy character type. The closest match is NumPy's 'S1' dtype. In principle, it's supposed to be able to store arbitrary bytes. On HDF5, it uses a UTF-8 encoded string with a fixed size of 1 (but note that HDF5 does not complain about storing arbitrary bytes).
- NC_STRING: netCDF's newer variable-length string type. It's only available on netCDF4 (not netCDF3). It corresponds to an HDF5 variable-length string with UTF-8 encoding.
- NC_CHAR with an _Encoding attribute: xarray and netCDF4-Python support an ad-hoc convention for storing unicode strings in NC_CHAR data-types, by adding the attribute {'_Encoding': 'UTF-8'}. The data is still stored as fixed-width strings, but xarray (and netCDF4-Python) can decode them as unicode.

NC_STRING would seem like a clear win in cases where it's supported, but as @crusaderky points out in #2040, in many cases it actually results in much larger netCDF files than using character arrays, which are more easily compressed. Nonetheless, we currently default to storing unicode strings in NC_STRING, because it's the most portable option -- every tool that handles HDF5 and netCDF4 should be able to read it properly as unicode strings.
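A sketch of how these three on-disk representations can be selected from xarray (file names are illustrative, and exact behavior may vary by backend and version):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"data": ("x", np.array(["foo", "bár"], dtype=object))})

# Default for unicode strings: NC_STRING (variable-length, netCDF4 only).
ds.to_netcdf("nc_string.nc")

# Fixed-width character array (NC_CHAR), decoded as UTF-8 on read via the
# ad-hoc _Encoding convention described above.
ds["data"].encoding = {"dtype": "S1", "_Encoding": "utf-8"}
ds.to_netcdf("nc_char_encoded.nc")

# Plain NC_CHAR holding raw bytes, with no _Encoding attribute.
ds_bytes = xr.Dataset({"data": ("x", np.array([b"foo", b"bar"]))})
ds_bytes.to_netcdf("nc_char.nc")
```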
NumPy/Python string types
On the Python side, our options are perhaps even more confusing:
- dtype=np.string_ corresponds to fixed-length bytes. This is the default dtype for strings on Python 2, because on Python 2 strings are the same as bytes.
- dtype=np.unicode_ corresponds to fixed-length unicode. This is the default dtype for strings on Python 3, because on Python 3 strings are the same as unicode.
- dtype=np.object_, as arrays of either bytes or unicode objects. This is a pragmatic choice, because otherwise NumPy has no support for variable-length strings. We also use this (like pandas) to mark missing values with np.nan.

Like pandas, we are pretty liberal with converting back and forth between fixed-length (np.string_ / np.unicode_) and variable-length (object dtype) representations of strings as necessary. This works pretty well, though converting from object arrays in particular has downsides, since it cannot be done lazily with dask.
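For concreteness, a quick look at the three representations (values are illustrative):

```python
import numpy as np

print(np.array([b"foo", b"barbaz"]).dtype)              # |S6: fixed-length bytes
print(np.array(["foo", "barbaz"]).dtype)                # <U6: fixed-length unicode
print(np.array(["foo", "barbaz"], dtype=object).dtype)  # object: variable-length
```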
Current behavior of xarray
Currently, xarray uses the same behavior on Python 2 and 3. The priority was faithfully round-tripping data from a particular version of Python to netCDF and back, which the current serialization behavior achieves.
This can also be selected explicitly for most data-types by setting dtype in encoding:

- 'S1' for NC_CHAR (with or without encoding)
- str for NC_STRING (though I'm not 100% sure it works properly currently when given bytes)

Script for generating table:
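The original script and its output table are not reproduced in this capture; a minimal sketch of the kind of roundtrip it tabulated might look like:

```python
import numpy as np
import xarray as xr

# Write each string representation to netCDF and report the dtype that
# comes back, one row per case.
cases = {
    "fixed bytes": np.array([b"foo"]),
    "fixed unicode": np.array(["foo"]),
    "object bytes": np.array([b"foo"], dtype=object),
    "object unicode": np.array(["foo"], dtype=object),
}
for name, values in cases.items():
    filename = f"table-{name.replace(' ', '-')}.nc"
    xr.Dataset({"data": ("x", values)}).to_netcdf(filename)
    with xr.open_dataset(filename) as ds:
        print(f"{name}: wrote {values.dtype}, read back {ds['data'].dtype}")
```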
Potential alternatives
The main option I'm considering is switching the default to NC_CHAR with UTF-8 encoding for np.string_ / str, and for object arrays of bytes / str on Python 2. The current behavior could be explicitly toggled by setting an encoding of {'_Encoding': None}.

This would imply two changes:

1. By default, serializing np.string_ / bytes data as NC_CHAR with an _Encoding of 'UTF-8'.
2. Implicitly decoding that character data back into unicode strings on read.

This implicit conversion would be consistent with Python 2's general handling of bytes/unicode, and would facilitate reading netCDF files on Python 3 that were written with Python 2.
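Under the proposal, opting back into today's behavior might look like this (hypothetical usage of the proposed {'_Encoding': None} toggle, not an existing API):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"data": ("x", np.array([b"foo"], dtype=object))})

# Proposed toggle (hypothetical, per this issue): disable the implicit
# UTF-8 _Encoding and keep the current bytes-in, bytes-out behavior.
ds["data"].encoding["_Encoding"] = None
ds.to_netcdf("opt_out.nc")
```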
The counter-argument is that it may not be worth changing this at this late point, given that we will be sunsetting Python 2 support by year's end.