
No XPU devices found while running on Docker #81

Open
v3vishal opened this issue Nov 17, 2024 · 5 comments


v3vishal commented Nov 17, 2024

Hi there, I'm trying to get intel-extension-for-tensorflow working on the Arc graphics of my Core Ultra 5 125H, and I have built a Docker container as described in the docs. However, when running env_check.py, I get this output:


__file__:     //env_check.py
Check Python
         Python 3.10.12 is Supported.
Check Python Passed

Check OS
        OS ubuntu:22.04 is Supported
Check OS Passed

Check Tensorflow
2024-11-17 07:52:27.501076: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-11-17 07:52:27.504476: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used. 
2024-11-17 07:52:27.553677: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-11-17 07:52:27.553752: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-11-17 07:52:27.556099: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-11-17 07:52:27.567004: I external/local_tsl/tsl/cuda/cudart_stub.cc:31] Could not find cuda drivers on your machine, GPU will not be used. 
2024-11-17 07:52:27.567281: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-17 07:52:29.058960: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-11-17 07:52:30.818112: W external/local_tsl/tsl/lib/monitoring/collection_registry.cc:81] Trying to register 2 metrics with the same name: /tensorflow/core/bfc_allocator_delay. The old value will be erased in order to register a new one. Please check if you link the metric more than once, or if the name is already used by other metrics.
2024-11-17 07:52:30.818389: W external/local_tsl/tsl/lib/monitoring/collection_registry.cc:81] Trying to register 2 metrics with the same name: /xla/service/gpu/compiled_programs_count. The old value will be erased in order to register a new one. Please check if you link the metric more than once, or if the name is already used by other metrics.
2024-11-17 07:52:30.821138: W external/local_tsl/tsl/lib/monitoring/collection_registry.cc:81] Trying to register 2 metrics with the same name: /jax/pjrt/pjrt_executable_executions. The old value will be erased in order to register a new one. Please check if you link the metric more than once, or if the name is already used by other metrics.
2024-11-17 07:52:30.821247: W external/local_tsl/tsl/lib/monitoring/collection_registry.cc:81] Trying to register 2 metrics with the same name: /jax/pjrt/pjrt_executable_execution_time_usecs. The old value will be erased in order to register a new one. Please check if you link the metric more than once, or if the name is already used by other metrics.
2024-11-17 07:52:31.173258: I itex/core/wrapper/itex_gpu_wrapper.cc:38] Intel Extension for Tensorflow* GPU backend is loaded.
2024-11-17 07:52:31.174278: I external/local_xla/xla/pjrt/pjrt_api.cc:67] PJRT_Api is set for device type xpu
2024-11-17 07:52:31.174314: I external/local_xla/xla/pjrt/pjrt_api.cc:72] PJRT plugin for XPU has PJRT API version 0.33. The framework PJRT API version is 0.34.
2024-11-17 07:52:31.206038: E external/intel_xla/xla/stream_executor/sycl/sycl_gpu_runtime.cc:178] Can not found any devices.
2024-11-17 07:52:31.206283: E itex/core/kernels/xpu_kernel.cc:60] Failed precondition: No visible XPU devices. To check runtime environment on your host, please run itex/tools/python/env_check.py.
If you need help, create an issue at https://github.com/intel/intel-extension-for-tensorflow/issues
2024-11-17 07:52:31.255435: E itex/core/devices/gpu/itex_gpu_runtime.cc:174] Can not found any devices. To check runtime environment on your host, please run itex/tools/python/env_check.py.
If you need help, create an issue at https://github.com/intel/intel-extension-for-tensorflow/issues
        Tensorflow 2.15.1 is installed.
Check Tensorflow Passed

Check Intel GPU Driver
Package: intel-level-zero-gpu
Status: install ok installed
Priority: optional
Section: libs
Installed-Size: 28239
Maintainer: Intel Graphics Team <linux-graphics@intel.com>
Architecture: amd64
Source: intel-compute-runtime
Version: 1.3.27642.50-803~22.04
Depends: libc6 (>= 2.34), libgcc-s1 (>= 3.4), libigdgmm12 (>= 22.3.15), libstdc++6 (>= 12), libigc1 (>= 1.0.12812), libigdfcl1 (>= 1.0.12812), libnl-3-200, libnl-route-3-200
Description: Intel(R) Graphics Compute Runtime for oneAPI Level Zero.
 Level Zero is the primary low-level interface for language and runtime
 libraries. Level Zero offers fine-grain control over accelerators
 capabilities, delivering a simplified and low-latency interface to
 hardware, and efficiently exposing hardware capabilities to applications.
Homepage: https://github.com/oneapi-src/level-zero
Original-Maintainer: Debian OpenCL Maintainers <pkg-opencl-devel@lists.alioth.debian.org>
Package: intel-opencl-icd
Status: install ok installed
Priority: optional
Section: libs
Installed-Size: 23865
Maintainer: Intel Graphics Team <linux-graphics@intel.com>
Architecture: amd64
Source: intel-compute-runtime
Version: 23.43.27642.50-803~22.04
Replaces: intel-opencl
Provides: opencl-icd
Depends: libc6 (>= 2.34), libgcc-s1 (>= 3.4), libigdgmm12 (>= 22.3.15), libstdc++6 (>= 12), ocl-icd-libopencl1, libigc1 (>= 1.0.12812), libigdfcl1 (>= 1.0.12812)
Recommends: intel-igc-cm (>= 1.0.100)
Breaks: intel-opencl
Conffiles:
 /etc/OpenCL/vendors/intel.icd d0a34d0b4f75385c56ee357bb1b8e2d0
Description: Intel graphics compute runtime for OpenCL
 The Intel(R) Graphics Compute Runtime for OpenCL(TM) is a open source
 project to converge Intel's development efforts on OpenCL(TM) compute
 stacks supporting the GEN graphics hardware architecture.
 .
 Supported platforms:
 - Intel Core Processors with Gen8 GPU (Broadwell) - OpenCL 2.1
 - Intel Core Processors with Gen9 GPU (Skylake, Kaby Lake, Coffee Lake) - OpenCL 2.1
 - Intel Atom Processors with Gen9 GPU (Apollo Lake, Gemini Lake) - OpenCL 1.2
 - Intel Core Processors with Gen11 GPU (Ice Lake) - OpenCL 2.1
 - Intel Core Processors with Gen12 graphics devices (formerly Tiger Lake) - OpenCL 2.1
Homepage: https://github.com/intel/compute-runtime
Original-Maintainer: Debian OpenCL Maintainers <pkg-opencl-devel@lists.alioth.debian.org>
Package: level-zero
Status: install ok installed
Priority: optional
Section: libs
Installed-Size: 1049
Maintainer: Intel Graphics Team <linux-graphics@intel.com>
Architecture: amd64
Source: level-zero-loader
Version: 1.14.0-744~22.04
Depends: libc6 (>= 2.34), libgcc-s1 (>= 3.3.1), libstdc++6 (>= 11)
Description: Intel(R) Graphics Compute Runtime for oneAPI Level Zero.
 Level Zero is the primary low-level interface for language and runtime
 libraries. Level Zero offers fine-grain control over accelerators
 capabilities, delivering a simplified and low-latency interface to
 hardware, and efficiently exposing hardware capabilities to applications.
 .
 This package provides the loader for oneAPI Level Zero compute runtimes.
Homepage: https://github.com/oneapi-src/level-zero
Package: libigc1
Status: install ok installed
Priority: optional
Section: libs
Installed-Size: 86364
Maintainer: Intel Graphics Team <linux-graphics@intel.com>
Architecture: amd64
Source: intel-graphics-compiler
Version: 1.0.15468.29-803~22.04
Depends: libc6 (>= 2.34), libgcc-s1 (>= 3.4), libstdc++6 (>= 12), zlib1g (>= 1:1.2.2)
Description: Intel graphics compiler for OpenCL -- core libs
 The Intel(R) Graphics Compiler for OpenCL(TM) is an llvm based compiler
 for OpenCL(TM) targeting Intel Gen graphics hardware architecture.
 .
 This package includes the core libraries.
Homepage: https://github.com/intel/intel-graphics-compiler
Original-Maintainer: Debian OpenCL team <pkg-opencl-devel@lists.alioth.debian.org>
Package: libigdfcl1
Status: install ok installed
Priority: optional
Section: libs
Installed-Size: 116046
Maintainer: Intel Graphics Team <linux-graphics@intel.com>
Architecture: amd64
Source: intel-graphics-compiler
Version: 1.0.15468.29-803~22.04
Depends: libc6 (>= 2.34), libgcc-s1 (>= 3.4), libstdc++6 (>= 11), zlib1g (>= 1:1.2.0), libz3-4 (>= 4.7.1)
Description: Intel graphics compiler for OpenCL -- OpenCL library
 The Intel(R) Graphics Compiler for OpenCL(TM) is an llvm based compiler
 for OpenCL(TM) targeting Intel Gen graphics hardware architecture.
 .
 This package includes the library for OpenCL.
Homepage: https://github.com/intel/intel-graphics-compiler
Original-Maintainer: Debian OpenCL team <pkg-opencl-devel@lists.alioth.debian.org>
Package: libigdgmm12
Status: install ok installed
Priority: optional
Section: libs
Installed-Size: 648
Maintainer: Intel Graphics Team <linux-graphics@intel.com>
Architecture: amd64
Multi-Arch: same
Source: intel-gmmlib
Version: 22.3.15-803~22.04
Replaces: libigdgmm11
Depends: libc6 (>= 2.34), libgcc-s1 (>= 3.3.1), libstdc++6 (>= 4.1.1)
Description: Intel Graphics Memory Management Library -- shared library
 The Intel Graphics Memory Management Library provides device specific
 and buffer management for the Intel Graphics Compute Runtime for
 OpenCL and the Intel Media Driver for VAAPI.
 .
 This library is only useful for Broadwell and newer CPUs.
 .
 This package includes the shared library.
Homepage: https://github.com/intel/gmmlib
Original-Maintainer: Debian Multimedia Maintainers <debian-multimedia@lists.debian.org>
Check Intel GPU Driver Passed

Check OneAPI
       223:     find library=libsycl.so.7 [0]; searching
       223:       trying file=/usr/local/lib/python3.10/dist-packages/tensorflow-plugins/../intel_extension_for_tensorflow/libsycl.so.7
       223:       trying file=/opt/intel/oneapi/redist/lib/libsycl.so.7
       223:     calling init: /opt/intel/oneapi/redist/lib/libsycl.so.7
       223:     calling fini: /opt/intel/oneapi/redist/lib/libsycl.so.7 [0]
        Intel(R) OneAPI DPC++/C++ Compiler is Installed.
Recommended dpcpp version is 2024.2.1-1079
       223:     find library=libmkl_sycl_blas.so.4 [0]; searching
       223:       trying file=/usr/local/lib/python3.10/dist-packages/tensorflow-plugins/../intel_extension_for_tensorflow/libmkl_sycl_blas.so.4
       223:       trying file=/opt/intel/oneapi/redist/lib/libmkl_sycl_blas.so.4
       223:     calling init: /opt/intel/oneapi/redist/lib/libmkl_sycl_blas.so.4
       223:     calling fini: /opt/intel/oneapi/redist/lib/libmkl_sycl_blas.so.4 [0]
       223:     find library=libmkl_sycl_lapack.so.4 [0]; searching
       223:       trying file=/usr/local/lib/python3.10/dist-packages/tensorflow-plugins/../intel_extension_for_tensorflow/libmkl_sycl_lapack.so.4
       223:       trying file=/opt/intel/oneapi/redist/lib/libmkl_sycl_lapack.so.4
       223:     calling init: /opt/intel/oneapi/redist/lib/libmkl_sycl_lapack.so.4
       223:     calling fini: /opt/intel/oneapi/redist/lib/libmkl_sycl_lapack.so.4 [0]
       223:     find library=libmkl_sycl_dft.so.4 [0]; searching
       223:       trying file=/usr/local/lib/python3.10/dist-packages/tensorflow-plugins/../intel_extension_for_tensorflow/libmkl_sycl_dft.so.4 
       223:       trying file=/opt/intel/oneapi/redist/lib/libmkl_sycl_dft.so.4
       223:     calling init: /opt/intel/oneapi/redist/lib/libmkl_sycl_dft.so.4
       223:     calling fini: /opt/intel/oneapi/redist/lib/libmkl_sycl_dft.so.4 [0]
        Intel(R) OneAPI Math Kernel Library is Installed.
Recommended onemkl version is 2024.2.1-103
Check OneAPI Passed

Check Tensorflow Requirements

Check Intel(R) Extension for TensorFlow* Requirements Passed

I'm unable to get any XPU devices detected, but /dev/dri shows card0, which means my card is attached.

I'm on a time crunch right now, so any help would be immensely appreciated.

I've tried WSL as well, and Docker is the only setup where TensorFlow and oneAPI work.

Thanks in advance!
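For reference, a minimal sketch of the kind of docker run invocation that passes the GPU into the container (the image tag, script path, and render-node name here are illustrative assumptions, not taken from the docs):

```shell
# Sketch only: pass the host's DRM device nodes into the container so the
# SYCL runtime can open the GPU, and add the host render group so a
# non-root container user has permission on /dev/dri/renderD*.
docker run -it --rm \
  --device /dev/dri \
  --group-add "$(stat -c '%g' /dev/dri/renderD128)" \
  intel/intel-extension-for-tensorflow:xpu \
  python env_check.py
```

If the container user lacks the host's render group, the nodes under /dev/dri exist inside the container but cannot be opened, which would match the "No visible XPU devices" symptom.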

@srinarayan-srikanthan

Hi @v3vishal, did you build the container or pull it from Docker Hub?

@v3vishal
Author

Pulled it from Docker Hub.

@srinarayan-srikanthan

Okay, can you try the steps here to make sure the container is set up the right way: https://github.com/intel/ai-containers/tree/main/preset#run-on-gpu

@rverma-dev

Facing the same issue with an Intel Arc GPU. If I run clinfo inside the Docker image, it also reports the correct graphics card:

clinfo
Number of platforms                               1
  Platform Name                                   Intel(R) OpenCL
  Platform Vendor                                 Intel(R) Corporation
  Platform Version                                OpenCL 3.0 LINUX
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_spirv_linkonce_odr cl_khr_fp64 cl_khr_fp16 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_extended_bit_ops cl_khr_icd cl_khr_il_program cl_khr_suggested_local_work_size cl_intel_unified_shared_memory cl_intel_devicelib_assert cl_khr_subgroup_ballot cl_khr_subgroup_shuffle cl_khr_subgroup_shuffle_relative cl_khr_subgroup_extended_types cl_khr_subgroup_non_uniform_arithmetic cl_khr_subgroup_non_uniform_vote cl_khr_subgroup_clustered_reduce cl_intel_subgroups cl_intel_subgroups_char cl_intel_subgroups_short cl_intel_subgroups_long cl_intel_required_subgroup_size cl_intel_spirv_subgroups cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_intel_device_attribute_query cl_intel_exec_by_local_thread cl_intel_vec_len_hint cl_intel_device_partition_by_names cl_khr_spir cl_khr_image2d_from_buffer cl_intel_concurrent_dispatch
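A working clinfo only exercises the OpenCL stack, whereas the ITEX XPU backend discovers devices through the SYCL/Level Zero runtime. A useful cross-check (a sketch, assuming the oneAPI tools are available on PATH inside the image) is:

```shell
# Sketch: list the devices the SYCL runtime can see. An empty or
# OpenCL/CPU-only listing alongside a working clinfo points at the
# Level Zero driver or device permissions rather than OpenCL.
sycl-ls

# Optionally restrict discovery to Level Zero GPUs to isolate that path:
ONEAPI_DEVICE_SELECTOR=level_zero:gpu sycl-ls
```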

Whereas tf.config.list_physical_devices("XPU") doesn't list any GPU at all. Here are the relevant logs:

2024-12-19 10:52:08.894583: I itex/core/wrapper/itex_gpu_wrapper.cc:38] Intel Extension for Tensorflow* GPU backend is loaded.
2024-12-19 10:52:08.895041: I external/local_xla/xla/pjrt/pjrt_api.cc:67] PJRT_Api is set for device type xpu
2024-12-19 10:52:08.895070: I external/local_xla/xla/pjrt/pjrt_api.cc:72] PJRT plugin for XPU has PJRT API version 0.33. The framework PJRT API version is 0.34.
2024-12-19 10:52:08.910143: E external/intel_xla/xla/stream_executor/sycl/sycl_gpu_runtime.cc:178] Can not found any devices.
2024-12-19 10:52:08.910239: E itex/core/kernels/xpu_kernel.cc:60] Failed precondition: No visible XPU devices. To check runtime environment on your host, please run itex/tools/python/env_check.py.
If you need help, create an issue at https://github.com/intel/intel-extension-for-tensorflow/issues
2024-12-19 10:52:08.941591: E itex/core/devices/gpu/itex_gpu_runtime.cc:174] Can not found any devices. To check runtime environment on your host, please run itex/tools/python/env_check.py.
If you need help, create an issue at https://github.com/intel/intel-extension-for-tensorflow/issues
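The device check above can be wrapped in a small helper for quick reruns (a sketch; the function name is ours, and it degrades gracefully when TensorFlow is not installed in the current environment):

```python
# Sketch: report the XPU devices TensorFlow can see, or None when
# TensorFlow itself is unavailable in this environment.
def list_xpu_devices():
    try:
        import tensorflow as tf  # loads the ITEX plugin if it is installed
    except ImportError:
        return None
    return [d.name for d in tf.config.list_physical_devices("XPU")]

devices = list_xpu_devices()
print(devices if devices else "No XPU devices visible")
```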


yinghu5 commented Dec 20, 2024

Hi @rverma-dev,

Could you please share the output of running env_check.py from https://github.com/intel/intel-extension-for-tensorflow/tree/main/tools/python, or install the new release (https://github.com/intel/intel-extension-for-tensorflow/releases/tag/v2.15.0.2) in a conda environment?

Thanks
