
Is CUDA available: false in Rust but true in C++ #904

Open
Davidos533 opened this issue Oct 26, 2024 · 3 comments
Davidos533 commented Oct 26, 2024

Hello,
I have had this problem for a long time and can't fix it.

systeminfo
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.19045 N/A Build 19045
GPU: 
nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.94                 Driver Version: 560.94         CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1060 3GB  WDDM  |   00000000:01:00.0  On |                  N/A |
| 28%   37C    P8             11W /  120W |    1170MiB /   3072MiB |      1%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

PyTorch 2.5.0 C++ libtorch, version cu118,
downloaded from here:

https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-2.5.0%2Bcu118.zip

All variables are set both for the user and system-wide:
(path to libtorch) PATH:

C:\FILES\libtorch
C:\FILES\libtorch\lib
LIBTORCH_INCLUDE=C:\FILES\libtorch
LIBTORCH_LIB=C:\FILES\libtorch
LIBTORCH=C:\FILES\libtorch
CUDA_HOME=...v11.8
CUDA_PATH=...v11.8

I'm trying to use this example: https://github.com/LaurentMazare/tch-rs/tree/main/examples/char-rnn

.cargo/config.toml

[env]
LIBTORCH="C:/FILES/libtorch"

with one addition at line 42: println!("{}", device.is_cuda());
When I run it, I get:

**false** (Is CUDA available: false)
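Before digging further, one thing worth checking is whether the CUDA build of libtorch is even present where the linker looks for it. A minimal std-only sketch (the helper name is hypothetical; it assumes the libtorch zip layout from the variables above, where the DLLs live under `lib\`):

```rust
use std::path::Path;

/// Hypothetical diagnostic: returns true if the given libtorch root
/// contains torch_cuda.dll, the DLL the CUDA backend needs on Windows.
fn has_torch_cuda(libtorch_root: &str) -> bool {
    Path::new(libtorch_root)
        .join("lib")
        .join("torch_cuda.dll")
        .exists()
}

fn main() {
    // Assumes LIBTORCH points at the extracted libtorch folder,
    // e.g. C:\FILES\libtorch as in the environment variables above.
    match std::env::var("LIBTORCH") {
        Ok(root) => println!("torch_cuda.dll present: {}", has_torch_cuda(&root)),
        Err(_) => println!("LIBTORCH is not set"),
    }
}
```

If this prints `false`, the problem is the libtorch download itself rather than tch-rs.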

And the most interesting thing:
when I build it in C++ following these instructions

https://pytorch.org/cppdocs/installing.html

my code looks like:

#include <torch/torch.h>
#include <iostream>

int main() {
    if (torch::cuda::is_available()) {
        std::cout << "CUDA is available! Running on GPU." << std::endl;
    } else {
        std::cout << "CUDA is not available. Running on CPU." << std::endl;
    }

    return 0;
}

my CMakeLists.txt looks like:

cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(example-app)

find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

add_executable(example-app main.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 17)

# The following code block is suggested to be used on Windows.
# According to https://github.com/pytorch/pytorch/issues/25457,
# the DLLs need to be copied to avoid memory errors.
if (MSVC)
  file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
  add_custom_command(TARGET example-app
                     POST_BUILD
                     COMMAND ${CMAKE_COMMAND} -E copy_if_different
                     ${TORCH_DLLS}
                     $<TARGET_FILE_DIR:example-app>)
endif (MSVC)

it compiles, and I get:

CUDA is available! Running on GPU.

Please help me

Edit:
I compared the Rust-compiled app with the C++-compiled app using Dependency Walker.
The Rust app has no TORCH_CUDA.dll dependency.

C++ app deps: (screenshot)

Rust app deps: (screenshot)
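The same comparison can be done without Dependency Walker: `dumpbin /dependents app.exe` (from the MSVC tools) prints the import table as text, and a few lines of Rust can scan that output for the DLL that matters here. A sketch, assuming dumpbin's standard "Image has the following dependencies:" layout:

```rust
/// Extracts the DLL names from `dumpbin /dependents` output.
/// Sketch: keeps every trimmed line ending in ".dll" (case-insensitive).
fn dependent_dlls(dump: &str) -> Vec<String> {
    dump.lines()
        .map(str::trim)
        .filter(|l| l.to_ascii_lowercase().ends_with(".dll"))
        .map(str::to_string)
        .collect()
}

fn main() {
    // Shortened sample in the shape dumpbin produces.
    let dump = "  Image has the following dependencies:\n\n    torch_cuda.dll\n    torch_cpu.dll\n    c10.dll\n    KERNEL32.dll";
    let dlls = dependent_dlls(dump);
    println!(
        "links torch_cuda.dll: {}",
        dlls.iter().any(|d| d == "torch_cuda.dll")
    );
}
```

A binary whose import list lacks torch_cuda.dll will always report CUDA as unavailable, no matter what the driver says.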


Davidos533 commented Oct 27, 2024

OMG guys,
I lost more than a week on this,
and now it works!

I downgraded to tch-rs version 0.17.0
and used this libtorch version:

https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-2.4.0%2Bcu118.zip

and torch_cuda.dll is in the dependencies!

Dump of file C:\FILES\Projects\street-recognizer\target\release\street-recognizer.exe

File Type: EXECUTABLE IMAGE

  Image has the following dependencies:

    bcryptprimitives.dll
    api-ms-win-core-synch-l1-2-0.dll
    torch_cuda.dll
    torch_cpu.dll
    c10.dll
    KERNEL32.dll
    ntdll.dll
    MSVCP140.dll
    VCRUNTIME140.dll
    VCRUNTIME140_1.dll
    api-ms-win-crt-string-l1-1-0.dll
    api-ms-win-crt-heap-l1-1-0.dll
    api-ms-win-crt-math-l1-1-0.dll
    api-ms-win-crt-runtime-l1-1-0.dll
    api-ms-win-crt-stdio-l1-1-0.dll
    api-ms-win-crt-environment-l1-1-0.dll
    api-ms-win-crt-locale-l1-1-0.dll

  Summary

        3000 .data
        6000 .pdata
       2E000 .rdata
        1000 .reloc
       96000 .text

and when I run it:
Is CUDA available: true

But tch-rs version 0.18.0
does not work!
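In Cargo.toml terms, the working combination from this comment is a pinned tch release matching libtorch 2.4.0+cu118 (version numbers taken from the comment above):

```toml
[dependencies]
tch = "0.17.0"
```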


Anivie commented Nov 9, 2024

The answer in this link worked for me.


mulingya commented Dec 2, 2024

@Davidos533 Thank you very much for sharing. I used your suggestion to install and successfully used CUDA for calculation. The entire process only took half an hour. My environment variable configuration is more concise. Here is the complete configuration:

On Windows:

  1. tch-rs version: 0.17.0
  2. libtorch version: https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-2.4.0%2Bcu118.zip
  3. nvidia cuda: https://developer.nvidia.com/cuda-11-8-0-download-archive
  4. variables set:
LIBTORCH: C:\libtorch
Path: C:\libtorch\lib

This is my test code:

use tch::Tensor;

fn main() {
    println!("Cuda available: {}", tch::Cuda::is_available());
    println!("Has cuda: {}", tch::utils::has_cuda());
    println!("Cudnn available: {}", tch::Cuda::cudnn_is_available());

    let t = Tensor::from_slice(&[3, 1, 4, 1, 5]);
    let t = t * 2;
    t.print();
}

output:

Cuda available: true
Has cuda: true
Cudnn available: true
  6
  2
  8
  2
 10
[ CPUIntType{5} ]
