nc_test4 failed when using intel compilers #876

Closed
dongli opened this issue Feb 21, 2018 · 10 comments · Fixed by #1302
Comments

dongli commented Feb 21, 2018

Please provide as much of the following information as you can, as applicable to the issue being reported. Naturally, not all information is relevant to every issue, but the more information we have to start, the better!

Environment Information

Feel free to skip this if the issue is related to documentation, a feature request, or general discussion.

  • What platform are you using? (please provide specific distribution/version in summary)
    • Linux
    • Windows
    • OSX
    • Other
    • NA
  • 32 and/or 64 bit?
    • 32-bit
    • 64-bit
  • What build system are you using?
    • autotools (configure)
    • cmake
  • Can you provide a sample netCDF file or C code to recreate the issue?
    • Yes (please attach to this issue, thank you!)
    • No
    • Not at this time

Summary of Issue

I am installing netcdf-c on a CentOS 7.4.1708 system with the following compilers:

  • C: icc version 13.1.0 (gcc version 4.7.0 compatibility)
  • Fortran: ifort version 13.1.0

The dependencies zlib, szip, and hdf5 (1.10.1) have already been installed. make completes, but make check gives the following errors in nc_test4/test-suite.log:

===========================================
   netCDF 4.6.0: nc_test4/test-suite.log
===========================================

# TOTAL: 66
# PASS:  64
# SKIP:  0
# XFAIL: 0
# FAIL:  2
# XPASS: 0
# ERROR: 0

.. contents:: :depth: 2

FAIL: tst_filterparser
======================

mismatch: N.A. [7] baseline=0d params=3287505826
mismatch: N.A. [8] baseline=0d params=1097305129
mismatch2: ud.d [7-8] baseline=0d,0d params=3287505826,1097305129

Testing filter parser.
FAIL tst_filterparser (exit status: 1)

FAIL: tst_filter
================

findplugin.sh loaded
final HDF5_PLUGIN_PATH=/tmp/starman/netcdf-c/netcdf-4.6.0/nc_test4/hdf5plugins/.libs
*** Testing dynamic filters using API

*** Testing API: bzip2 compression.
show parameters for bzip2: level=9
show chunks: chunks=4,4,4,4

*** Testing API: bzip2 decompression.
data comparison: |array|=256
no data errors
*** Pass: API dynamic filter

*** Testing dynamic filters parameter passing
test1: compression.
test: nparams=14: params= 1 239 23 65511 27 77 93 1145389056 0 0 1 2147483648 4294967295 4294967295
 chunks=4,4,4,4
test1: decompression.
data comparison: |array|=256
no data errors
1c1
< var:_Filter = "32768,1,239,23,65511,27,77,93,1145389056,0,0,1,2147483648,4294967295,4294967295" ;
---
> var:_Filter = "32768,1,239,23,65511,27,77,93,1145389056,3287505826,1097305129,1,2147483648,4294967295,4294967295" ;
FAIL tst_filter.sh (exit status: 1)
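
For context on what those two mismatched entries are: nc_def_var_filter() only accepts 4-byte unsigned int parameters, so the test's 8-byte double ("ud.d" in the output above) has to be split across two consecutive parameter slots, and those are exactly the positions ([7-8]) where the icc build disagrees with the baseline. A minimal sketch of that kind of packing is below; the helper name and the word order are my own illustration, not the actual code in tst_filterparser.c:

    #include <string.h>

    /* Split an 8-byte double into the two 4-byte words that can be passed in
     * the unsigned int parameter list of nc_def_var_filter().  Whether the
     * two words (and their bytes) come back in the same order on the reading
     * side is exactly what the mismatch above is checking. */
    static void
    pack_double_param(double d, unsigned int params[2])
    {
        unsigned int words[2];
        memcpy(words, &d, sizeof(double)); /* reinterpret the 8 bytes as two 32-bit words */
        params[0] = words[0];
        params[1] = words[1];
    }

When packing and unpacking agree, the pair printed at positions [7-8] of the _Filter attribute reproduces the original double bit for bit; in the icc builds above the parser instead yields 3287505826,1097305129 where the baseline expects 0,0.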

Steps to reproduce the behavior

@DennisHeimbigner (Collaborator)

Are you by any chance running this on a big-endian machine?

dongli (Author) commented Mar 2, 2018

@DennisHeimbigner Sorry, I have no access to big-endian machines.

@DennisHeimbigner (Collaborator)

Good, then we have a chance of solving the problem :-)

@justbennet

I am seeing the same issue, using Intel compilers 14.0.2, hdf5 1.8.20, and CentOS 7.5. The configure line used was

./configure --prefix=/sw/arcts/centos7/intel_14_0_2/netcdf/4.6.0

All tests pass except the two filter tests.

[bennet@beta-build nc_test4]$ ./tst_filter.sh
findplugin.sh loaded
final HDF5_PLUGIN_PATH=/tmp/build/netcdf-4.6.0/nc_test4/hdf5plugins/.libs
*** Testing dynamic filters using API

*** Testing API: bzip2 compression.
show parameters for bzip2: level=9
show chunks: chunks=4,4,4,4

*** Testing API: bzip2 decompression.
data comparison: |array|=256
no data errors
*** Pass: API dynamic filter

*** Testing dynamic filters parameter passing
test1: compression.
test: nparams=14: params= 1 239 23 65511 27 77 93 1145389056 0 0 1 2147483648 4294967295 4294967295
 chunks=4,4,4,4
test1: decompression.
data comparison: |array|=256
no data errors
1c1
< var:_Filter = "32768,1,239,23,65511,27,77,93,1145389056,0,0,1,2147483648,4294967295,4294967295" ;
---
> var:_Filter = "32768,1,239,23,65511,27,77,93,1145389056,3287505826,1097305129,1,2147483648,4294967295,4294967295" ;

Output from running the executables directly:

[bennet@beta-build nc_test4]$ ./test_filter

*** Testing API: bzip2 compression.
show parameters for bzip2: level=9
show chunks: chunks=4,4,4,4
fail (218): NetCDF: HDF error
[bennet@beta-build nc_test4]$ ./tst_filterparser 

Testing filter parser.
mismatch: N.A. [7] baseline=0d params=3287505826
mismatch: N.A. [8] baseline=0d params=1097305129
mismatch2: ud.d [7-8] baseline=0d,0d params=3287505826,1097305129

@DennisHeimbigner (Collaborator)

I looked again at the Intel website and it appears that there is still no way for Unidata to obtain a free copy of the Intel icc compiler, so I doubt that we will be able to solve this. My only suggestion is to try reducing the level of optimization you are using and see if that fixes the problem.

@justbennet

Thanks for the suggestion about optimization. I will try with -O0.

I also looked more closely at the log; it may be an error in the underlying hdf5, which I will try to trace.

@justbennet

I looked at test_filter.c, and it appears to fail at

    CHECK(nc_enddef(ncid));

Is this by any chance using a feature of HDF5 that was introduced after 1.8.20? It appears to be getting an error from HDF5.
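
For anyone retracing this: the filter settings are only pushed down to HDF5 when the variable is actually created, which happens inside nc_enddef(), so an error surfacing there points at the filter metadata rather than the data itself. A stripped-down sketch of the call sequence involved is below; the file name, variable shape, and the single bzip2 parameter are placeholders of mine, not the values test_filter.c really uses, and the bzip2 plugin still has to be discoverable through HDF5_PLUGIN_PATH for the call to succeed:

    #include <stdio.h>
    #include <netcdf.h>

    /* Report any non-NC_NOERR status and bail out, roughly what the test's CHECK macro does. */
    #define CHECK(e) do { int stat = (e); if (stat != NC_NOERR) { \
        fprintf(stderr, "error: %s\n", nc_strerror(stat)); return 1; } } while (0)

    int main(void)
    {
        int ncid, dimid, varid;
        unsigned int params[1] = {9};                           /* bzip2 level 9, as in the logs above */

        CHECK(nc_create("filter_demo.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
        CHECK(nc_def_dim(ncid, "x", 256, &dimid));
        CHECK(nc_def_var(ncid, "var", NC_FLOAT, 1, &dimid, &varid));
        CHECK(nc_def_var_filter(ncid, varid, 307, 1, params));  /* 307 is the registered bzip2 filter id */
        CHECK(nc_enddef(ncid));   /* the filter reaches HDF5 here; this is where the test errors out */
        CHECK(nc_close(ncid));
        return 0;
    }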

@DennisHeimbigner (Collaborator)

It is more likely that the netcdf-c library is calling the HDF5 API with some bad value.
The fact that it is at enddef indicates that it is a metadata error as opposed to
a data error.
We may be able to get more info if you do the following.

  1. Rebuild the netcdf library with the extra ./configure option --enable-logging.
  2. Set this environment variable: export NETCDF_LOG_LEVEL=5
  3. Rerun the test (with the -x flag to sh).

justbennet commented Oct 7, 2018 via email

DennisHeimbigner added a commit that referenced this issue Feb 1, 2019
re: issue #1278
re: issue #876
re: issue #806

* Major change to the handling of 8-byte parameters for nc_def_var_filter.
  The old code was not well thought out.
  * The new algorithm is documented in docs/filters.md.
  * Added new utility file plugins/H5Zutil.c to support the new algorithm.
  * Modified plugins/H5Zmisc.c to use the new algorithm.
  * Renamed include/ncfilter.h to include/netcdf_filter.h
    and made it an installed header so clients can access the
    new algorithm utility.
  * Fixed nc_test4/tst_filterparser.c and nc_test4/test_filter_misc.c
    to use the new algorithm
* libdap4/ fixes:
  * d4swap.c has an error in the endian pre-processing such
    that record counts were not being swapped correctly.
  * d4data.c had an error in that checksums were being computed
    after endian swapping rather than before.
* ocinitialize() was never being called, so xxdr bigendian handling
  was never set correctly.
  * Required adding debug statements to occompile
* Found and fixed memory leak in ncdump.c

Not tested:
* HDF4
* Pnetcdf
* parallel HDF5
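
To make the 8-byte parameter change concrete (this is my own illustration of the idea summarized above, not code taken from the PR or from docs/filters.md): the core of such an algorithm is to pick one canonical encoding for the two 32-bit words that carry a double or 64-bit integer parameter, and to fix up the byte order on big-endian machines so the same _Filter attribute decodes to the same value everywhere. A sketch of that kind of fix-up, with a hypothetical function name:

    /* Hypothetical helper in the spirit of the change described above: make the
     * 8 bytes of a double/64-bit filter parameter use one canonical
     * (little-endian) word layout, swapping bytes only on big-endian hosts.
     * The real utility lives in plugins/H5Zutil.c and is exposed through the
     * installed netcdf_filter.h; its name and exact behavior may differ. */
    static void
    canonicalize_param8(unsigned char mem[8])
    {
        const unsigned int one = 1;
        int little_endian = (*(const unsigned char*)&one == 1);
        if (!little_endian) {
            /* swap each 4-byte word in place */
            for (int i = 0; i < 8; i += 4) {
                unsigned char t;
                t = mem[i];     mem[i]     = mem[i + 3]; mem[i + 3] = t;
                t = mem[i + 1]; mem[i + 1] = mem[i + 2]; mem[i + 2] = t;
            }
        }
    }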

@liubiyongge

I am hitting the same error; could you tell me how to solve this problem?
