Split KernelAbstractions into frontend and backends #200
Conversation
This is cool! I'm guessing we'll need to separately register these sub-packages in General?
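For reference, a subdirectory package can also be installed before it is registered, via Pkg's subdir support (Julia 1.5+). A sketch, assuming the JuliaGPU/KernelAbstractions.jl repo URL and the lib/CUDAKernels path used later in this thread:

using Pkg
# Add CUDAKernels directly from the monorepo; after registration in General,
# a plain Pkg.add("CUDAKernels") would work instead.
Pkg.add(url = "https://github.com/JuliaGPU/KernelAbstractions.jl",
        subdir = "lib/CUDAKernels")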
On the one hand I'm happy to put that stuff in CUDA.jl, but this feels way nicer, and is easier to develop in tandem with KA.jl.
Yeah, I need to figure out tests first. The two things I would like to do, which I don't think this allows for, are:
Is the plan that you'll register
Personally I'm not a big fan of the "multiple packages in a single GitHub repo" approach. (Despite the fact that I once asked for that feature 😂.) From my observation, it ends up causing some difficult issues down the line. Would you be open to instead putting CPUKernels in its own GitHub repo?
Such as? Developing the interface between these packages is much easier if you can do it in a single PR, rather than having to create PRs across packages, deal with semver and/or work with Manifests to make sure changes are picked up, duplicate (reverse) CI across packages, etc.
The main problem is if you ever decide in the future that you want to put the package in a separate GitHub repo. You now have to figure out a way to make sure that the new repo contains all of the same trees that the package had when it was in this repo. There used to be a subdir package in SnoopCompile.jl called SnoopCompileBot.jl. They wanted to split it out, but the process ended up being painful enough that they just made a new package (CompileBot.jl).
If the packages are that closely related, I would say they should just be the same package?
What's the main motivation of splitting KernelAbstractions and CPUKernels into separate packages? Is it so that people can load KernelAbstractions without having to load CUDA? If so, instead of making CPUKernels a different package, you could make it a submodule, and then lazily load that submodule using Requires.
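A sketch of what the Requires.jl approach suggested above might look like (this is not what the PR does; the submodule file name is illustrative, and the UUID is CUDA.jl's registered package UUID):

module KernelAbstractions

using Requires

function __init__()
    # Only load the CUDA backend if the user's session also loads CUDA.jl.
    @require CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba" include("cuda_backend.jl")
end

end # module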
Blasphemy! Requires doesn't respect semver, is hard to PackageCompile, etc. The goal is indeed to prevent a dependency on both CUDA/AMDGPU/..., and the only proper way to do so currently is to create more packages. Maybe with Preferences we could do Requires-style dependencies better, but then you'd still have the package dependency on both CUDA.jl and AMDGPU.jl (even though you may not end up loading them), which is unacceptable.
This is what I had in mind when I suggested Requires - you keep the dependency in the
Unless we get a proper solution to JuliaLang/Pkg.jl#1285
Sure, but that's not going to happen anytime soon.
I was taking the idea to its extreme. But yes, the general idea is to be able to use KernelAbstractions with just AMDGPU.jl or with CUDA.jl, and not force a dependency on both.
It's also not necessarily possible to depend on both CUDA.jl and AMDGPU.jl at the same time; AMDGPU.jl tends to lag significantly behind CUDA.jl, which can cause version incompatibilities between shared deps, like GPUArrays or Adapt.
Added new package CUDAKernels.jl
Co-authored-by: Julian P Samaroo <jpsamaroo@jpsamaroo.me>
bors try
Build failed:
Thanks Julian! As discussed on Slack, we should test the in-repo version of the subpackage. @maleadt, ideas on how to get Pkg to agree with us on that?
bors try
Build failed:
bors try
Build failed:
bors try
Build failed:
bors try
Build failed:
bors try |
# Develop both KernelAbstractions and the CUDAKernels subpackage into the
# test environment, then run the test suite.
commands:
  - julia --project=test -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd()))'
  - julia --project=test -e 'using Pkg; Pkg.develop(PackageSpec(path=joinpath(pwd(),"lib","CUDAKernels")))'
  - julia --project=test --color=yes --check-bounds=yes --code-coverage=user --depwarn=yes test/runtests.jl
Just for the record: it should be possible to put back JULIA_LOAD_PATH=@ once a fix like JuliaPackaging/BinaryBuilder.jl#1007 is implemented. (But using JULIA_LOAD_PATH=@ was just my preference for making sure that the test is reproducible, i.e., to avoid accidentally using @stdlib and @v#.#. It's not a strict requirement to make CI work.)
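For reference, the effect of JULIA_LOAD_PATH=@ can also be reproduced from inside Julia; a minimal sketch ("@" stands for the active project):

# Restrict code loading to the active project, mirroring JULIA_LOAD_PATH="@",
# so that packages are never picked up from @v#.# or @stdlib by accident.
empty!(LOAD_PATH)
push!(LOAD_PATH, "@")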
@DilumAluthge do you remember why we have three different CI files? Is it because of bors?
It's for the status badges in the README, I believe. You cannot generate more than one status badge for a single GHA workflow file.
bors r+
This split is so that packages can use KernelAbstractions.jl without having to depend on both AMDGPU.jl and CUDA.jl.
The other alternative is to move the backend code into CUDA.jl or AMDGPU.jl; thoughts?
@maleadt @jpsamaroo
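A sketch of what end-user code might look like after the split, assuming the CUDAKernels.jl package added in this PR exports a CUDADevice backend (the launch details follow KernelAbstractions' event-based API of the time and may differ):

using KernelAbstractions   # frontend: @kernel, @index, ...
using CUDAKernels          # backend: pulls in CUDA.jl
using CUDA

@kernel function add_one!(a)
    i = @index(Global)
    @inbounds a[i] += 1
end

a = CUDA.ones(1024)
kernel = add_one!(CUDADevice(), 256)   # instantiate for CUDA, workgroup size 256
event = kernel(a, ndrange = length(a)) # launch over the whole array
wait(event)                            # block until the kernel has finished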