Rust-CUDA is being rebooted! #130

Open
LegNeato opened this issue Jan 27, 2025 · 24 comments

Labels: enhancement (New feature or request), help wanted (Extra attention is needed), question (Further information is requested)

Comments

@LegNeato (Contributor) commented Jan 27, 2025

See https://rust-gpu.github.io/blog/2025/01/27/rust-cuda-reboot.

@RDambrosio016 has made me a new maintainer (I'm also a maintainer of rust-gpu).

Please comment here if you would like to be involved, or better yet put up some PRs or direct me to what needs to be done from your perspective!

@LegNeato added the enhancement, help wanted, and question labels on Jan 27, 2025
@AnubhabB

This is exciting! I'd love to contribute (if I can). Are there any areas to dig deep into? That would help with figuring out starting points and whether I'd be capable enough to contribute!

In any case, cheers. I'll closely follow how this evolves!

@LegNeato (Contributor, Author)

I'm still orienting myself as to the current state. I'd start with just trying to get the examples running on your machine and see if you hit anything I do not! Thank you so much for (potentially) helping. 🍻

@David-OConnor commented Jan 27, 2025

The big thing is to make it work. Try it on a few different machines (OSes, GPUs, CUDA versions, etc.) and make it work on modern rustc and CUDA versions without errors. I switched to cudarc because it is in a working state, and this isn't.

Dropping support for older versions of CUDA is fine if that makes it easier.

@apriori commented Jan 27, 2025

> The big thing is to make it work. Try it on a few different machines (OSes, GPUs, CUDA versions, etc.) and make it work on modern rustc and CUDA versions without errors. I switched to cudarc because it is in a working state, and this isn't.

That will be quite some work. rustc has changed significantly, and so has libNVVM.

@LegNeato As you are a maintainer of rust-gpu, I would be curious to know what in the end led you to Rust-CUDA. AFAIK, rust-gpu did not enter the compute-kernel area much.

@LegNeato (Contributor, Author)

@apriori Actually, Rust-GPU does have pretty good support for Vulkan compute! It's just that Embark and most current contributors are focused on graphics use-cases. I personally care more about GPGPU.

What led me here is that I see a lot of opportunity and overlap between the two projects. As an end user writing GPU code in Rust, what I really want is to not care about Vulkan vs CUDA as the output target at all, similar to how I don't care about Linux vs Windows when writing CPU Rust (or arm vs x86_64, for that matter). Of course, we also need to expose platform-specific stuff for those wanting to get the most out of their hardware or ecosystem (similar to how Rust on the CPU exposes platform-specific APIs and ISA-specific escape hatches), but the progressive disclosure of complexity is key.

This wasn't going to happen as two completely separate projects that only peek over the fence occasionally, or with Rust-CUDA no longer being developed. So I am involved in both and can hopefully bring them together where they differ only for difference's sake.

@txbm commented Jan 27, 2025

Will contribute

@buk0vec commented Jan 27, 2025

Would definitely love to help out; I think this is a really cool project.

@Schmiedium

I'd definitely like to contribute and get involved if I can. I'm currently a Master's student at Georgia Tech taking a parallel-algorithms course this semester. I have a few different machines, cards, and operating systems I can try the current iteration on and see what issues pop up.

@LegNeato (Contributor, Author)

@Schmiedium Awesome! I think one thing everyone is going to hit is that we are on a super old version of Rust, and cargo automatically upgrading dependency versions will cause issues. I'm trying to untangle that a bit currently.

@mooreniemi commented Jan 28, 2025

I'm noticing this too @LegNeato - do you have a branch going or no?

@danielglin

I'm a Rust beginner without any GPU programming experience, but I'd love to learn and help out where I can.

@Schmiedium commented Jan 28, 2025

So I had some time to play around with it. I'm running into two main issues, and they seem to be Windows-specific. This is on Windows 10 with a 2080 Ti, CUDA 12.8, and OptiX 8.1.

The first issue, I think, is on me and has to do with NVIDIA OptiX. I'm probably just having trouble getting it set up correctly, but the OptiX examples fail to compile with an error that the OPTIX_ROOT_DIR or OPTIX_ROOT environment variable isn't found. This points to the FindCUDAHelper looking for environment variables, but even with those set it still fails.
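(For anyone hitting the same error, here is a minimal sketch of the kind of environment-variable lookup such a build script performs. The variable names come from the error above; everything else, including the cargo directives, is an assumption for illustration, not this repo's actual build.rs.)

```rust
// Hypothetical build.rs sketch: locate the OptiX SDK via environment
// variables before emitting any compile/link configuration.
use std::env;
use std::path::PathBuf;

fn main() {
    // Re-run the build script whenever either variable changes.
    println!("cargo:rerun-if-env-changed=OPTIX_ROOT_DIR");
    println!("cargo:rerun-if-env-changed=OPTIX_ROOT");

    // Check OPTIX_ROOT_DIR first, then fall back to OPTIX_ROOT.
    let root: PathBuf = env::var_os("OPTIX_ROOT_DIR")
        .or_else(|| env::var_os("OPTIX_ROOT"))
        .map(PathBuf::from)
        .expect("set OPTIX_ROOT_DIR or OPTIX_ROOT to your OptiX SDK install path");

    // The SDK headers live under include/; hand them to whatever compiles
    // the C/C++ shims (e.g. a cc/bindgen step).
    println!("cargo:include={}", root.join("include").display());
}
```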

The second is ntapi. It looks like ntapi 0.3.7 includes code that is no longer valid Rust. This issue first cropped up in 2022 and was fixed. You can see the issue here. I guess one of the dependencies somewhere in this project's dependency tree is pulling in that version, causing the build error. I haven't yet been able to look into where it's being brought in, so I'm not sure how difficult it would be to fix.

I should be able to try this out on NixOS tomorrow with the same hardware, so I'll check in if I find anything there.

One more comment, not an issue per se: as of right now, this project still requires nightly Rust to build due to its use of #![feature] attributes, so be aware of that as well.

I'd be interested to know what you guys find

@BurtonQin

Glad to hear Rust CUDA is making a comeback! I went through the cust portion of the project last year. Since I work with both Rust and CUDA regularly and have experience with cudarc, I’d love to contribute. I also have an NVIDIA 4090 that could be useful for testing. Once the roadmap and contribution guidelines are ready, count me in to help out!

@ctrl-z-9000-times commented Jan 30, 2025

Hello, I've been trying to use cust for the past few weeks and I have some *ideas* for how the library could be improved. I think now, if ever, would be a good time to break compatibility to polish the existing API.

In particular: some of the flags are useless and none of them implement the Default trait.

  • StreamWaitEventFlags does nothing, because the underlying CUDA function (cudaStreamWaitEvent) doesn't use the flags (future-proofing, I guess?). I would remove this flag from the API; if options are ever needed, they can be added via a new function with a distinct name (e.g. stream.wait_event_foobar).
  • cust::init(cust::CudaFlags): same story as StreamWaitEventFlags.
  • StreamFlags does nothing because of an unsoundness issue ("StreamFlags::NON_BLOCKING is unsound because of fringe asynchronous memory copy behavior in CUDA", #15). We should decide how we're going to resolve that issue: this project can either accept the memory-safety hazard, or prohibit the option and lose those features. IMO we should prioritize memory safety and document the potential inefficiency. Either way, the current API was left in a somewhat broken state. (See the sketch after this list.)
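To make that concrete, here is a minimal sketch of the two changes suggested above (implementing Default, and dropping parameters the driver ignores). The names mirror cust 0.3.x, which already uses the bitflags crate, but the exact signatures here are assumptions, not the current API:

```rust
// Hypothetical sketch, not cust's actual code: give surviving flag types a
// Default, and drop parameters that the CUDA driver ignores anyway.
use bitflags::bitflags;

bitflags! {
    /// Stream-creation flags. These are honored by the driver, so they stay,
    /// but callers can now write `StreamFlags::default()`.
    pub struct StreamFlags: u32 {
        const NON_BLOCKING = 0x1;
    }
}

impl Default for StreamFlags {
    /// The empty set corresponds to CU_STREAM_DEFAULT.
    fn default() -> Self {
        StreamFlags::empty()
    }
}

/// Before: `cust::init(CudaFlags::empty())?` -- the argument must be empty,
/// so it carries no information. After: a zero-argument init wrapping cuInit(0).
pub fn init() -> Result<(), String> {
    // Placeholder for the real FFI call, e.g. `unsafe { cuInit(0) }`.
    Ok(())
}

fn main() {
    init().expect("CUDA initialization failed");
    let flags = StreamFlags::default(); // no more guessing which flag to pass
    assert!(!flags.contains(StreamFlags::NON_BLOCKING));
}
```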

Edit: another potential compatibility break is issue #110.

I'm looking forward to seeing where this project will go!
Sincerely


P.S. Here's what I think you should do with the cust library (current version: 0.3.2).
Plan to do two releases:

  1. 0.3.3: a final patch release with any easy bug fixes that have accumulated in the past 3 years. Are any of the outstanding PRs worth the effort of a patch release?
  2. 0.4.0: the release that breaks compatibility.

@LegNeato, you should make a tracking issue to discuss what will be included in each release that you plan to do.

@Schmiedium

I got the rest of the issues with my environment resolved. The main thing not building right now is nvvm_codegen. It looks like there is another issue in this repo for resolving that, so I can play around with it and see if I can get it to build.

I also agree with @ctrl-z-9000-times: if we want to do some redesign or break compatibility, now would be the best time.

@LegNeato (Contributor, Author)

Yep, open to breaking whatever; let's get to latest. The plan was to switch off NVVM and onto PTX directly, but after talking with NVIDIA I am not so sure that is the best way forward.

@LegNeato (Contributor, Author) commented Jan 31, 2025

@Schmiedium You might want to look at rust-gpu's forward port, as it had to deal with similar issues. I plan to take a look later this week, since I largely did that other forward port, but if you get time, go for it (just comment or start a draft PR or issue so we don't duplicate work) 😁

@devillove084

@LegNeato I'd like to share some observations on potential challenges with direct PTX usage and offer concrete ways I can try to help address them:

Key Challenges with PTX

  1. Toolchain Immaturity

    • Current PTX assembly workflows (e.g. ptxas integration) may lack Rust-friendly abstractions
    • Example: Manual memory alignment directives required for #[repr(C)] structs
  2. Debugging Friction

    • No mature PTX-level debugger integrated with rust-gdb/LLDB
    • Crash analysis requires manual mapping between PTX instructions and Rust source
  3. Optimization Burden

    • Missing auto-vectorization equivalent to NVVM's -opt=3
    • Developers must manually insert PTX pragmas (e.g. .reqntid 256); see the sketch after this list
  4. Cross-Architecture Support

    • JIT compilation via GPU driver may conflict with Rust's ABI stability goals
    • Need per-SM versioned PTX bundles (e.g. sm_80 vs sm_90)
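To illustrate the kind of manual, PTX-level work points 1-3 describe, here is a small self-contained sketch using Rust inline assembly on the nvptx64-nvidia-cuda target (nightly only). The helper and kernel names are hypothetical, not from any existing crate:

```rust
// Hypothetical sketch of manual PTX work on the nvptx64-nvidia-cuda target
// (nightly only): reading the %laneid special register via inline assembly.
#![no_std]
#![feature(abi_ptx, asm_experimental_arch)]

use core::arch::asm;

#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}

/// Lane index of the calling thread within its warp (0..=31).
#[inline]
unsafe fn lane_id() -> u32 {
    let id: u32;
    asm!("mov.u32 {}, %laneid;", out(reg32) id);
    id
}

/// Toy kernel: every thread stores its lane id. `write_lane_ids` is a
/// made-up name used for illustration.
#[no_mangle]
pub unsafe extern "ptx-kernel" fn write_lane_ids(out: *mut u32) {
    let lane = lane_id();
    *out.add(lane as usize) = lane;
}
```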

@LegNeato (Contributor, Author)

Great info! The topic came up because it was mentioned by @RDambrosio016 in #98 (comment), and @kjetilkjeka is actively working on and using the nvptx backend in rustc, so it is worth exploring the tradeoffs.

@jorge-ortega

> The second is ntapi. It looks like ntapi 0.3.7 includes code that is no longer valid Rust. ... I haven't yet been able to look into where it's being brought in, so I'm not sure how difficult it would be to fix.

This is being pulled in through the path_tracer example. Added details to #120.

@skinnyBat

I would love to contribute. I will first try and get the existing examples working on my setup.

@Schmiedium

@jorge-ortega Thanks! I found the package; it looks like it was an old version of sysinfo. I'm going to publish a branch for the forward port of the project to try to get all the dependencies updated.

And @LegNeato, thanks for the info on the rust-gpu forward port, I'll check that out for how they went about it

@kulst commented Feb 2, 2025

Hey, great to see this crate being rebooted.

I am interested in contributing as well. I have some experience using the nvptx backend from Rust, and I think it really could be a viable alternative to the nvvm codegen that Rust-CUDA currently uses. My observations so far:

  • It is possible to implement CUDA kernels with the nvptx backend in Rust. I implemented some simple ones, like stencil operations, matrix multiplication, and reduction
  • Some simple intrinsics are already available from Rust (like _syncthreads() or _block_idx_x()); see the sketch after this list
  • For other instructions (like texture fetching or atomic operations on floats), it is possible either to link against the corresponding llvm.nvvm intrinsics or to use inline assembly
  • Using shared memory requires inline assembly at the moment, but a solution for this is being actively discussed
  • Debugging such kernels should be possible with cuda-gdb. The llvm-bitcode-linker Rust tool (not the LLVM application) currently strips out all debug information. However, I was able to debug a simple kernel by compiling it with -g -O1 and manually running opt and llc without stripping the debug information. In my case, compiling with -O0 produced invalid PTX regardless of whether debug information was included
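As a concrete reference for the first two bullets, a minimal kernel built only from those intrinsics (nightly Rust, target nvptx64-nvidia-cuda; the kernel name is made up, and the feature-gate spelling may vary between nightlies):

```rust
// Hypothetical sketch of a kernel using only nvptx intrinsics available
// from core::arch. Nightly only.
#![no_std]
#![feature(abi_ptx, stdarch_nvptx)]

use core::arch::nvptx::{_block_dim_x, _block_idx_x, _syncthreads, _thread_idx_x};

#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}

/// Element-wise `out[i] = a[i] + b[i]`, one thread per element. `vec_add`
/// is a made-up name used for illustration.
#[no_mangle]
pub unsafe extern "ptx-kernel" fn vec_add(a: *const f32, b: *const f32, out: *mut f32, n: u32) {
    let i = (_block_idx_x() as u32) * (_block_dim_x() as u32) + _thread_idx_x() as u32;
    if i < n {
        *out.add(i as usize) = *a.add(i as usize) + *b.add(i as usize);
    }
    // Barrier intrinsic, callable directly from Rust (redundant here; shown
    // only because it is one of the intrinsics discussed above).
    _syncthreads();
}
```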

@mratsim commented Feb 4, 2025

Hello there,

I've been developing GPU kernels in Nim, CUDA, and LLVM IR → NVPTX for a while, including an LLVM-based JIT compiler with both NVVM and NVPTX backends (see my Nim hello world with both backends: https://github.com/mratsim/constantine/blob/v0.2.0/tests/gpu/hello_world_nvidia.nim#L107-L152).

> Yep, open to breaking whatever; let's get to latest. The plan was to switch off NVVM and onto PTX directly, but after talking with NVIDIA I am not so sure that is the best way forward.

The issue with NVVM is that it uses LLVM IR 7.0.1, which dates from December 2018, and the version just after, 7.1.0, was a breaking change. Quoting myself:

> ⚠ NVVM IR is based on LLVM 7.0.1 IR, which dates from December 2018.
> There are a couple of caveats:
>
>   • LLVM 7.0.1 is usually not available in repos, making installation difficult
>   • There was an ABI-breaking bug making the 7.0.1 and 7.1.0 versions messy (https://www.phoronix.com/news/LLVM-7.0.1-Released)
>   • LLVM 7.0.1 does not have LLVMBuildCall2 and relies on the deprecated LLVMBuildCall, meaning that supporting both it and the latest LLVM (for the AMDGPU and SPIR-V backends) will likely have heavy costs
>   • When generating an add-with-carry kernel with inline ASM calls from LLVM 14, if the LLVM IR is passed as bitcode, the kernel content is silently discarded; this does not happen with the built-in add. It is unclear whether it's Call2 or an inline ASM incompatibility that causes the issue
>   • When generating an add-with-carry kernel with inline ASM calls from LLVM 14, if the LLVM IR is passed as textual IR, the code is refused with NVVM_ERROR_INVALID_IR

Hence, using the LLVM NVPTX backend instead of libNVVM is likely the sustainable way forward.

There is a way to downgrade LLVM IR, which is what Julia does through https://github.com/JuliaGPU/GPUCompiler.jl with the package https://github.com/JuliaLLVM/llvm-downgrade, but they have to maintain a branch per LLVM release, and it seems quite cumbersome.
