enable initial build for rocm for fp6_llm #1
Conversation
```python
}
if use_cuda and not IS_ROCM:
```
What would be the compile flags for ROCm?
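(For context, a hedged sketch of how the cxx/nvcc flag split typically looks in a torch `setup.py`; the specific flags below are illustrative assumptions, not this PR's final values. With a ROCm build of torch, the `"nvcc"` key carries the hipcc flags.)

```python
# Illustrative sketch only: flag values are assumptions, not this PR's final flags.
import torch

IS_ROCM = torch.version.hip is not None

extra_compile_args = {"cxx": ["-O3"]}
if IS_ROCM:
    # hipcc accepts gcc-style flags; nvcc-only options such as
    # --expt-relaxed-constexpr must not be passed here.
    extra_compile_args["nvcc"] = ["-O3"]
else:
    extra_compile_args["nvcc"] = ["-O3", "--expt-relaxed-constexpr"]
```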
```
@@ -107,17 +108,35 @@ def get_extensions():
    extensions_cuda_dir = os.path.join(extensions_dir, "cuda")
    cuda_sources = list(glob.glob(os.path.join(extensions_cuda_dir, "**/*.cu"), recursive=True))

    if use_cuda:
        extensions_hip_dir = os.path.join(extensions_dir, "cuda", "fp6_llm")
        hip_sources = list(glob.glob(os.path.join(extensions_hip_dir, "*.cu"), recursive=True))
```
Seems fine for now, but we might have extensions other than ".cu" later?
Yes, once we enable all the extensions, we can remove hip_sources and start using cuda_sources directly.
```python
if IS_ROCM and use_cuda:
    sources += hip_sources

## TODO: remove this condition and use what we have in CUDA once we fix the individual builds.
```
Understood this is WIP; we might be able to consolidate the code later?
Yes.
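(A minimal sketch of the consolidated form discussed above, assuming all .cu extensions eventually build under ROCm; at that point the temporary hip_sources subset disappears and cuda_sources is used directly.)

```python
# Hypothetical end state: no fp6_llm-only subset and no IS_ROCM special case
# for source selection; hipify handles the .cu files on ROCm builds.
if use_cuda:
    sources += cuda_sources
```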
One other question: what would be the entry point for hipify_torch?
Also, I've created a staging branch for this PR: rocm_enablement_staging.
Updated the PR to use rocm_enablement_staging. Using main for syncing with upstream is a good idea.
The entry point is CUDAExtension. Torch has integrated hipify_torch as part of CUDAExtension.
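(For reference, a minimal sketch of that entry point: when torch itself is a ROCm build, torch.utils.cpp_extension.CUDAExtension runs hipify over the listed sources, so one setup.py serves both backends. The names and paths below are illustrative.)

```python
# Minimal sketch: CUDAExtension triggers hipification automatically when
# torch.version.hip is set, so the same .cu sources build for ROCm.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="torchao",  # illustrative
    ext_modules=[
        CUDAExtension(
            name="torchao._C",  # illustrative extension name
            sources=["torchao/csrc/cuda/fp6_llm/fp6_linear.cu"],  # illustrative path
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```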
Initial enablement task for FP6_llm.