Fix -Wmaybe-uninitialized errors #2517

Open · stefwalter wants to merge 1 commit into main from maybe-uninitialized

Conversation

stefwalter

These come up when building FBGEMM as part of Pytorch or other stacks.


netlify bot commented Apr 19, 2024

Deploy Preview for pytorch-fbgemm-docs ready!

Latest commit: 07af6a0
Latest deploy log: https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/66220a55263c8800089f83b1
Deploy Preview: https://deploy-preview-2517--pytorch-fbgemm-docs.netlify.app

These come up when building FBGEMM as part of Pytorch or other stacks.

stefwalter force-pushed the maybe-uninitialized branch from 9164c16 to 07af6a0 on April 19, 2024 at 06:08
stefwalter (Author)

Examples of the build problems:

/data/src/FBGEMM/src/FbgemmI8Depthwise3DAvx2.cc: In function ‘void fbgemm::depthwise_3d_same_pad_(const conv_param_t<3>&, int32_t, const uint8_t*, const int32_t*, const PackedDepthWiseConvMatrix&, const float*, int32_t, uint8_t*, const int32_t*, const BIAS_TYPE*, const float*, int, int) [with bool FUSE_RELU = true; bool HAS_BIAS = true; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::GROUP; BIAS_TYPE = int]’:
/data/src/FBGEMM/src/FbgemmI8Depthwise3DAvx2.cc:657:46: note: ‘kernel’ was declared here
  657 |         GenI8Depthwise::jit_kernel_signature kernel;
      |                                              ^~~~~~
In function ‘void fbgemm::depthwise_3d_kernel_(int, int, int, int, int, int, int, int, std::array<int, 3>, int, int, int, int32_t, const uint8_t*, const int32_t*, const int8_t*, const float*, int32_t, int32_t*, uint8_t*, int32_t*, const int32_t*, const BIAS_TYPE*, const float*, void (**)(const uint8_t*, const int8_t*, int32_t*, int32_t*, int, int, int, const int*, int)) [with bool FUSE_RELU = true; bool HAS_BIAS = true; bool A_SYMMETRIC = true; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::GROUP; BIAS_TYPE = int]’,
    inlined from ‘void fbgemm::depthwise_3d_same_pad_(const conv_param_t<3>&, int32_t, const uint8_t*, const int32_t*, const PackedDepthWiseConvMatrix&, const float*, int32_t, int32_t*, uint8_t*, const int32_t*, const BIAS_TYPE*, const float*, int, int) [with bool FUSE_RELU = true; bool HAS_BIAS = true; bool A_SYMMETRIC = true; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::GROUP; BIAS_TYPE = int]’ at /data/src/FBGEMM/src/FbgemmI8Depthwise3DAvx2.cc:293:22,
    inlined from ‘void fbgemm::depthwise_3d_same_pad_(const conv_param_t<3>&, int32_t, const uint8_t*, const int32_t*, const PackedDepthWiseConvMatrix&, const float*, int32_t, uint8_t*, const int32_t*, const BIAS_TYPE*, const float*, int, int) [with bool FUSE_RELU = true; bool HAS_BIAS = true; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::GROUP; BIAS_TYPE = int]’ at /data/src/FBGEMM/src/FbgemmI8Depthwise3DAvx2.cc:834:18:
/data/src/FBGEMM/src/FbgemmI8Depthwise3DAvx2.cc:86:9: error: ‘kernel’ may be used uninitialized [-Werror=maybe-uninitialized]
   86 |   kernel(
      |   ~~~~~~^
   87 |       A + ((t_in * H + h_in) * W + w_in) * IC,
      |       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   88 |       Bp,
      |       ~~~
   89 |       C_int32,
      |       ~~~~~~~~
   90 |       B_SYMMETRIC ? nullptr : row_offsets,
      |       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   91 |       H,
      |       ~~ 
   92 |       W,
      |       ~~ 
   93 |       IC,
      |       ~~~
   94 |       internal::avx2_ps_or_epi32_combined_mask,
      |       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   95 |       A_zero_point);
      |       ~~~~~~~~~~~~~
/data/src/FBGEMM/src/FbgemmI8Depthwise3DAvx2.cc: In function ‘void fbgemm::depthwise_3d_same_pad_(const conv_param_t<3>&, int32_t, const uint8_t*, const int32_t*, const PackedDepthWiseConvMatrix&, const float*, int32_t, uint8_t*, const int32_t*, const BIAS_TYPE*, const float*, int, int) [with bool FUSE_RELU = true; bool HAS_BIAS = true; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::GROUP; BIAS_TYPE = int]’:
/data/src/FBGEMM/src/FbgemmI8Depthwise3DAvx2.cc:267:46: note: ‘kernel’ was declared here
  267 |         GenI8Depthwise::jit_kernel_signature kernel;
      |                                              ^~~~~~
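
The typical fix for this pattern is to value-initialize kernel at its declaration so that every code path sees a defined value. A minimal sketch of that kind of change (the actual one-commit diff in this PR is not shown here, so treat this as illustrative only):

  // Before: declared uninitialized and assigned only on some branches,
  // which GCC's interprocedural analysis cannot always prove safe.
  GenI8Depthwise::jit_kernel_signature kernel;

  // After: value-initialized, so -Wmaybe-uninitialized sees a defined
  // (null) value on every path to the call site.
  GenI8Depthwise::jit_kernel_signature kernel{};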

And another:

...
  /usr/bin/c++ -DFBGEMM_STATIC -I/data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/cpuinfo/include -I/data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/fbgemm/third_party/asmjit/src -I/data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/fbgemm/include -I/data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/fbgemm -I/data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/cmake/../third_party/benchmark/include -isystem /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/cmake/../third_party/googletest/googlemock/include -isystem /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/cmake/../third_party/googletest/googletest/include -isystem /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/protobuf/src -isystem /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/gemmlowp -isystem /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/neon2sse -isystem /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/XNNPACK/include -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -Wall -Wextra -Werror -Wno-deprecated-declarations -Wimplicit-fallthrough -O3 -DNDEBUG -std=c++17 -fPIC -fvisibility=hidden -m64 -mavx2 -mfma -mavx512f -mavx512bw -mavx512dq -mavx512vl -MD -MT third_party/fbgemm/CMakeFiles/fbgemm_avx512.dir/src/UtilsAvx512.cc.o -MF third_party/fbgemm/CMakeFiles/fbgemm_avx512.dir/src/UtilsAvx512.cc.o.d -o third_party/fbgemm/CMakeFiles/fbgemm_avx512.dir/src/UtilsAvx512.cc.o -c /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/fbgemm/src/UtilsAvx512.cc
  In function ‘void fbgemm::internal::transpose_contiguous_16x2_block(const float*, float*, int64_t, int)’,
      inlined from ‘void fbgemm::internal::transpose_avx512_contiguous_thin(int64_t, int64_t, const T*, int64_t, T*, int64_t) [with T = float]’ at /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/fbgemm/src/UtilsAvx512.cc:1827:38:
  /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/fbgemm/src/UtilsAvx512.cc:970:35: error: ‘r’ may be used uninitialized [-Werror=maybe-uninitialized]
    970 |   d[0] = _mm512_permutex2var_epi32(r[0], index1, r[1]);
        |          ~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
  /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/fbgemm/src/UtilsAvx512.cc: In function ‘void fbgemm::internal::transpose_avx512_contiguous_thin(int64_t, int64_t, const T*, int64_t, T*, int64_t) [with T = float]’:
  /data/src/rebuilding-the-wheel/work-dir/pytorch-v2.2.2/pytorch-v2.2.2/third_party/fbgemm/src/UtilsAvx512.cc:922:11: note: ‘r’ declared here
    922 |   __m512i r[2], d[2];
        |           ^
  cc1plus: all warnings being treated as errors
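
The same pattern applies in the AVX-512 transpose: zero-initializing the vector array at its declaration satisfies the analysis. A sketch, assuming the fix follows the usual approach (the real diff may differ):

  #include <immintrin.h>

  // Before: GCC cannot prove both elements of r are written before
  // _mm512_permutex2var_epi32 reads them.
  //   __m512i r[2], d[2];

  // After: explicitly zero both elements; d is written before it is
  // read, so it can stay uninitialized.
  __m512i r[2] = {_mm512_setzero_si512(), _mm512_setzero_si512()};
  __m512i d[2];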


markmc commented Apr 22, 2024

@stefwalter seems this is a gcc issue, see #1666

This uninitialized variable issue is being addressed with #1697. However, this alone will not fix the build because there is a known regression in GCC 12 (see here).
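
When the warning is a compiler false positive, as with the GCC 12 regression above, a common alternative to initializing the variable is to suppress the diagnostic locally with pragmas. A generic sketch, not necessarily what FBGEMM ended up doing:

  // Silence a known-false-positive -Wmaybe-uninitialized only for the
  // region where GCC misfires; clang also understands these pragmas.
  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
  // ... affected function(s) ...
  #pragma GCC diagnostic pop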

stefwalter (Author)

> @stefwalter seems this is a gcc issue, see #1666
>
> This uninitialized variable issue is being addressed with #1697. However, this alone will not fix the build because there is a known regression in GCC 12 (see here).

Hmmm, yup, seems like that could well be the case. Here's what I'm building with (on Fedora 40):

stef@falcon:~/src/linux$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/14/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:amdgcn-amdhsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,m2,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-libstdcxx-backtrace --with-libstdcxx-zoneinfo=/usr/share/zoneinfo --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl=/builddir/build/BUILD/gcc-14.0.1-20240411/obj-x86_64-redhat-linux/isl-install --enable-offload-targets=nvptx-none,amdgcn-amdhsa --enable-offload-defaulted --without-cuda-driver --enable-gnu-indirect-function --enable-cet --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux --with-build-config=bootstrap-lto --enable-link-serialization=1
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 14.0.1 20240411 (Red Hat 14.0.1-0) (GCC) 
