META: CUDA external language implementation #18338
Conversation
Force-pushed from fb9155a to 796d182
Force-pushed from 11a8e58 to bf6d1c6
Small update: LLVM.jl is now powerful enough to fully implement the PTX JIT (which is pretty simple).
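For reference, a minimal sketch of what that PTX JIT looks like, assuming LLVM.jl's wrappers around the LLVM-C target-machine API: parse textual IR, look up the NVPTX back-end, and request assembly output, which for this target is PTX. The exact names (`emit`, the keyword `Target(triple=...)` constructor, the context handling) have varied across LLVM.jl versions, so treat them as assumptions rather than the actual CUDAnative code.

```julia
using LLVM

# Textual IR for a trivial device function; typed pointers match the
# LLVM versions of this era (newer LLVM uses opaque `ptr` instead).
const ir = """
define void @kernel(float* %out) {
    store float 1.0, float* %out
    ret void
}
"""

# Initialize the back-ends; LLVM.jl wraps the usual LLVM-C initializers.
InitializeAllTargetInfos()
InitializeAllTargets()
InitializeAllTargetMCs()
InitializeAllAsmPrinters()

Context() do ctx
    mod = parse(LLVM.Module, ir)                 # some versions take `ctx` explicitly
    triple = "nvptx64-nvidia-cuda"
    target = Target(triple=triple)               # look up the NVPTX back-end
    tm = TargetMachine(target, triple, "sm_35")  # pick a GPU architecture
    # For NVPTX, "assembly" output is PTX, ready for the CUDA driver to load.
    print(String(emit(tm, mod, LLVM.API.LLVMAssemblyFile)))
end
```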
Would be cool to keep in mind a WebAssembly use case for this: https://github.com/WebAssembly/wasm-jit-prototype
Yes, definitely. Implementation-wise, we can look at Rust for inspiration, as they have a functional LLVM-based wasm target nowadays. Do you have any use cases in mind?
Well, my (dream/pony) high-level use cases include distributing fully client-side interactive reports and machine-learning mobile apps.
Force-pushed from 4c4e504 to 767bf64
Rebased on top of #18496. I've been working on similar 'codegen params' in the JuliaGPU/julia repo, but that still needs to be fleshed out.
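A rough sketch of where the 'codegen params' idea is headed, assuming the keyword API that eventually surfaced on master as `Base.CodegenParams` (the exact field set is a guess and has varied across Julia versions):

```julia
# Sketch: a package constructs its own codegen parameters instead of
# patching the compiler; a GPU back-end would disable host-only features.
params = Base.CodegenParams(track_allocations = false,  # no allocation tracking on device
                            code_coverage     = false)  # no coverage instrumentation
# `params` is then passed along when the package drives codegen itself.
```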
Force-pushed from 8536254 to 0487033
Force-pushed from 0487033 to 87862c6
Force-pushed from edca252 to 00c0db3
Force-pushed from 070a9c7 to c7115eb
Force-pushed from c7115eb to c263a6e
I've removed the commits from this PR, making it a tracking issue instead. The code has moved to the tb/cuda branch.
Force-pushed from 2a758ac to 7762b56
This is outdated.
Summary: make inference and codegen modular and configurable for packages to reuse, e.g., when generating code for a different platform. This way, we avoid bloating the compiler and make it possible to develop and import new hardware support without requiring modifications to the compiler or Base.
Concretely, this PR will track the necessary additions to Julia
master
and host the remaining diff to support the CUDA compiler over at CUDAnative.jl.Inference
We need to influence inference in order to select alternative functions for some stdlib functionality (think `sin`, which now calls out to libm but needs to call another library), sometimes even depending on the GPU version.
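As a concrete example of such a substitution (a sketch modeled on the approach CUDAnative.jl ended up taking; `device_sin` is a made-up name for illustration), a GPU-side `sin` has to lower to libdevice's `__nv_sin` rather than the CPU's libm call:

```julia
# Sketch: GPU-flavored `sin` methods lowering to NVIDIA libdevice
# intrinsics via the `extern` llvmcall form. These only link when
# compiled for the NVPTX target; on the CPU the symbols won't resolve.
@inline device_sin(x::Float64) = ccall("extern __nv_sin",  llvmcall, Cdouble, (Cdouble,), x)
@inline device_sin(x::Float32) = ccall("extern __nv_sinf", llvmcall, Cfloat,  (Cfloat,),  x)
```

Inference hooks are what would let `sin` itself resolve to methods like these during GPU compilation, without users having to call a differently named function.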
Codegen

Much of this is similar to inference:
Longer-term:
Support