Support for HIP backend / AMD GPUs #46

Open · jithunnair-amd opened this issue Dec 22, 2020 · 6 comments

@jithunnair-amd
Do you plan to add support for a HIP backend in addition to the CUDA backend? HIP is a C++ Runtime API and Kernel Language that allows developers to create portable applications for AMD and NVIDIA GPUs from single source code.

Adding support for the HIP backend would enable the triton library to also support AMD GPUs. The HIP API very closely resembles the CUDA API, and tools such as hipify can automatically translate most CUDA sources into HIP sources.
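For illustration, here is a minimal HIP program showing how closely the API mirrors CUDA; the kernel and names are made up for this example, not Triton code. hipify's translation is essentially a mechanical rename of cuda* calls to hip* calls:

```cpp
#include <hip/hip_runtime.h>

// Hypothetical kernel, for illustration only.
__global__ void scale(float* x, float a, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) x[i] *= a;
}

int main() {
  const int n = 1024;
  float* d_x = nullptr;
  // hipMalloc / hipFree mirror cudaMalloc / cudaFree one-to-one,
  // which is what makes hipify's translation mostly mechanical.
  hipMalloc(&d_x, n * sizeof(float));
  // Portable launch macro; CUDA's <<<grid, block>>> maps onto this.
  hipLaunchKernelGGL(scale, dim3(n / 256), dim3(256), 0, 0, d_x, 2.0f, n);
  hipDeviceSynchronize();
  hipFree(d_x);
  return 0;
}
```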

@ptillet
Collaborator

ptillet commented Dec 23, 2020

I have thought about it. At some point in the not-so-distant past I had an AMD backend working through the AMDGPU LLVM target and the OpenCL runtime. It would be a fairly low amount of work to bring this back to life, though it would be preferable to replace OpenCL with HIP.

I am working full-time on Triton at OpenAI, and mostly focused on A100 at the moment. That said, I would be down to spend some time reviving AMD support in the Triton compiler (though it would not support newer matrix instructions) if someone were willing to help with runtime support -- that is, updating https://github.com/ptillet/triton/tree/master/lib/driver to have hip_{buffer,module,device,etc.} classes that parallel the cu_{buffer,module,device,etc.} classes. Happy to chat more about it if this is something AMD is willing to help with :)
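For concreteness, a minimal sketch of what one such class might look like; the interface and names here are an illustrative guess, not the actual lib/driver API:

```cpp
#include <hip/hip_runtime.h>
#include <cstddef>
#include <stdexcept>

// Hypothetical sketch: a RAII wrapper over a HIP device allocation,
// meant to parallel a cu_buffer-style class.
class hip_buffer {
public:
  explicit hip_buffer(std::size_t size) : size_(size) {
    if (hipMalloc(&ptr_, size) != hipSuccess)
      throw std::runtime_error("hipMalloc failed");
  }
  ~hip_buffer() { hipFree(ptr_); }  // release the device allocation
  hip_buffer(const hip_buffer&) = delete;             // non-copyable,
  hip_buffer& operator=(const hip_buffer&) = delete;  // like a handle

  // Copy host memory into the device buffer.
  void write(const void* host, std::size_t n) {
    if (hipMemcpy(ptr_, host, n, hipMemcpyHostToDevice) != hipSuccess)
      throw std::runtime_error("hipMemcpy (H2D) failed");
  }

  void* data() const { return ptr_; }
  std::size_t size() const { return size_; }

private:
  void* ptr_ = nullptr;
  std::size_t size_;
};
```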

@jithunnair-amd
Author

cc @jeffdaily @sunway513

@jithunnair-amd
Author

Yes, we can help with that. I'll start taking a look at https://github.com/ptillet/triton/tree/master/lib/driver. Are there any other specifics you'd like to share?

@ptillet
Collaborator

ptillet commented Jan 8, 2021

Since the community is quite small at this point, I can probably provide direct support for any questions you may have. Feel free to contact me at phil@openai.com with your e-mail address, and I can invite you to our #triton Slack channel :)

I will also try to add a README.md for the driver/ directory soon.

dfukalov pushed a commit to dfukalov/triton that referenced this issue Dec 17, 2022

…ning_phase_2_unit_tests: Add remaining Phase 2 subtests to test_core_amd.py.

@gururise

gururise commented Apr 5, 2023

Any progress on this issue? AMD HIP support would be much appreciated.

@jeffdaily

We are in the process of upstreaming ROCm support. See the list of ROCm-related PRs: https://github.com/openai/triton/pulls?q=is%3Apr+rocm+

For the recent PyTorch 2.0 release, we were using a fork of triton: https://github.com/ROCmSoftwarePlatform/triton/releases/tag/pytorch-triton-rocm-v2.0.1

This v2.0.1 tag corresponds to the wheels that can be downloaded from PyPI: https://pypi.org/project/pytorch-triton-rocm/2.0.1/#files
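For example, the wheel can be installed directly with pip, pinning the version to match the tag above:

```
pip install pytorch-triton-rocm==2.0.1
```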

oraluben pushed a commit to oraluben/triton that referenced this issue Sep 11, 2024
* Add matrix vector multiplication tutorial.

* Fix: resolve review comment

* Add test for: torch.matmul (reshape x to 2D), torch.matmul (transpose weight and flip the order), torch.nn.Linear

* Change duplicated line_styles

gglin001 pushed a commit to gglin001/triton that referenced this issue Nov 13, 2024 (same commit as above)