Support for HIP backend / AMD GPUs #46
I have thought about it. At some point in the not-so-distant past I had an AMD backend working through the AMDGPU LLVM target and the OpenCL runtime. It would be a fairly low amount of work to bring this back to life, though it would be preferable to replace OpenCL with HIP. I am working full-time on Triton at OpenAI, and mostly focused on A100 at the moment. That said, I would be down to spend some time reviving AMD support in the Triton compiler (though it would not support newer matrix instructions) if someone was willing to help with runtime support -- that is, updating https://github.com/ptillet/triton/tree/master/lib/driver to have hip_{buffer,module,device,etc.} classes that parallel cu_{buffer,module,device,etc.}. Happy to chat more about it if this is something AMD is willing to help with :)
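The parallel class layout proposed above can be sketched as follows. This is an illustrative Python sketch only (the real `lib/driver` layer is C++, and the class and function names here are hypothetical); it just shows the idea of hip_* classes mirroring cu_* classes behind a common interface, with the backend chosen at runtime.

```python
# Hypothetical sketch of a backend-agnostic driver layer where HIP classes
# parallel the existing CUDA classes. No real CUDA/HIP calls are made.

class Buffer:
    """Common device-buffer interface shared by both backends."""
    def __init__(self, size):
        self.size = size

    def backend(self):
        raise NotImplementedError

class CuBuffer(Buffer):
    """CUDA buffer: in the real driver this would wrap cuMemAlloc/cuMemFree."""
    def backend(self):
        return "cuda"

class HipBuffer(Buffer):
    """HIP buffer: in the real driver this would wrap hipMalloc/hipFree."""
    def backend(self):
        return "hip"

def make_buffer(backend, size):
    """Dispatch to the cu_* or hip_* implementation for the active backend."""
    impls = {"cuda": CuBuffer, "hip": HipBuffer}
    return impls[backend](size)
```

The same pattern would repeat for module, device, stream, and the other driver abstractions, so the compiler above the driver layer never needs to know which vendor runtime is underneath.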
Yes, we can help with that. I'll start taking a look at https://github.com/ptillet/triton/tree/master/lib/driver. Any more specifics that you'd like to share?
Since the community is quite small at this point, I can probably provide direct support for any question that you may have. Feel free to contact me at phil@openai.com with your e-mail address and I could invite you to our #triton slack channel :) I will also try to add a README.md for the driver/ directory soon.
Any progress on this issue? AMD HIP support would be much appreciated.
We are in the process of upstreaming ROCm support. See the list of PRs relating to ROCm: https://github.com/openai/triton/pulls?q=is%3Apr+rocm+ For the recent PyTorch 2.0 release, we were using a fork of triton: https://github.com/ROCmSoftwarePlatform/triton/releases/tag/pytorch-triton-rocm-v2.0.1. This v2.0.1 tag corresponds to the wheels that can be downloaded from pypi.
* Add matrix vector multiplication tutorial.
* Fix: resolve review comment
* Add test for: torch.matmul (reshape x to 2D), torch.matmul (transpose weight and flip the order), torch.nn.Linear
* Change duplicated line_styles
Do you plan to add support for a HIP backend in addition to the CUDA backend? HIP is a C++ runtime API and kernel language that allows developers to create portable applications for AMD and NVIDIA GPUs from a single source code.
Adding support for the HIP backend would enable the Triton library to also support AMD GPUs. The HIP API very closely resembles the CUDA API, and we have tools such as hipify to allow us to easily "translate" most of the CUDA sources into HIP sources.
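To illustrate the kind of translation hipify performs, here is a minimal sketch (not the real hipify tool, which handles far more cases: the driver API, library calls, kernel launch syntax, headers, and so on). Most CUDA runtime calls map to HIP by a simple `cuda*` to `hip*` rename; the mapping table below covers only a few real API pairs for illustration.

```python
import re

# Small sample of real CUDA-runtime -> HIP-runtime renames.
# The actual hipify tools cover the full API surface.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaFree": "hipFree",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
}

def hipify(source: str) -> str:
    """Rename CUDA runtime identifiers to their HIP equivalents.

    Word boundaries (\\b) ensure cudaMemcpy does not match inside the
    longer identifier cudaMemcpyHostToDevice.
    """
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = re.sub(r"\b" + cuda_name + r"\b", hip_name, source)
    return source
```

Because the two APIs are so close in shape (same argument orders, same enum names modulo the prefix), this mostly-textual translation is enough to port a large fraction of CUDA host code.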