Problem writing big-endian data to existing file #1802
Comments
So, recreating this, I am relieved that the issue does not appear to be in storing the data, but rather in how
OK, still sorting through this; making a comment so it does not shuffle down the stack. I'm seeing largely consistent behavior between BE and LE systems, but I have found some differences. I need to sort out the intent (endianness in memory vs. endianness in storage) so I know which behavior is expected. Work is ongoing (on this and so many other things O_o).
Writing big-endian data into an existing netCDF4 file (on a little-endian system) does not seem to swap the bytes properly.
The following example from Unidata/netcdf4-python#1033 (comment) triggers the problem:
The resulting file does not contain the expected values 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. This was tested with netCDF version 4.7.3 on an Ubuntu 20.04 system; the latest development snapshot (b9bb44f) was also tested, with the same result.
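The garbled output itself is not preserved above, but the symptom of a missed byte swap is predictable: each small int32 value lands in the opposite end of the word. A stdlib sketch (not the original reproduction) of what the values 0-9 look like when written big-endian but interpreted little-endian:

```python
import struct

values = list(range(10))

# Pack 0..9 as big-endian 32-bit ints, then reinterpret the same
# bytes as little-endian -- the effect of a missed byte swap.
raw = struct.pack(">10i", *values)
swapped = list(struct.unpack("<10i", raw))

print(swapped[:3])  # [0, 16777216, 33554432]
```

Each value n ends up in the high-order byte, i.e. n * 2**24, which is the kind of output typically reported for this class of bug.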
If the intermediate nc_close and nc_open calls are commented out (i.e. still writing into a new file), then the data is written correctly.
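For reference, "big-endian storage" means the most significant byte comes first on disk regardless of host byte order, so on a little-endian host a correct implementation must swap on write and again on read. A minimal stdlib sketch of that round trip (the file path is illustrative, not from the original report):

```python
import os
import struct
import tempfile

values = list(range(10))
path = os.path.join(tempfile.mkdtemp(), "demo.bin")  # illustrative scratch file

# Write the values most-significant-byte first, as a big-endian
# variable should be stored on disk regardless of host byte order.
with open(path, "wb") as f:
    f.write(struct.pack(">10i", *values))

# Reading back with the matching big-endian format restores 0..9;
# only a mismatched (little-endian) interpretation garbles the data.
with open(path, "rb") as f:
    restored = list(struct.unpack(">10i", f.read()))

print(restored)
```

The bug described above is consistent with the second swap (or the first) being skipped when the file is reopened rather than freshly created.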