Enable triton sparse gemm only for CUDA #27

Draft
hsharsha wants to merge 1 commit into base: rocm-jaxlib-v0.4.28-qa

Conversation


@hsharsha hsharsha commented Jul 5, 2024

No description provided.

  const HloDotInstruction* dot = Cast<HloDotInstruction>(hlo);
  if (dot->sparse_operands()) {
    return Unimplemented("Sparse dot is supported by Triton emitter only.");
  }
#endif

@i-chaochen i-chaochen Jul 5, 2024

IIUC, sparse dot will require AddGemmFusionAutotuningPasses, and that requires not only the Triton autotuner but also the cuDNN fusion frontend.

@hsharsha hsharsha (Author)

This is to unblock JAX's use of the sparse dot operation: https://github.com/ROCm/frameworks-internal/issues/8118

@hsharsha hsharsha marked this pull request as draft July 8, 2024 17:00
@hsharsha hsharsha (Author) commented Jul 8, 2024

Converting to draft as we also need to address failing tests.

@@ -2921,10 +2921,12 @@ absl::StatusOr<llvm::Value*> ElementalIrEmitter::EmitElementalDot(
        "Algorithm not supported by the ElementalIrEmitter: %s",
        PrecisionConfig::Algorithm_Name(hlo->precision_config().algorithm())));
  }
#ifdef GOOGLE_CUDA
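
For reference, combining this hunk with the snippet quoted at the first review anchor, the guarded region appears to read as follows. This is a reconstruction from the two anchors, not the verbatim patch, and the surrounding function body is abbreviated:

#ifdef GOOGLE_CUDA
  // Only the Triton emitter lowers sparse dots, so the elemental path
  // rejects them. With this change the rejection is compiled solely into
  // CUDA builds; ROCm builds skip it (per the author's comment above, to
  // unblock JAX's sparse dot use).
  const HloDotInstruction* dot = Cast<HloDotInstruction>(hlo);
  if (dot->sparse_operands()) {
    return Unimplemented("Sparse dot is supported by Triton emitter only.");
  }
#endif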

@hsharsha @i-chaochen
I believe it would be better to use something like

#ifndef TENSORFLOW_USE_ROCM

and also to add

local_defines = if_cuda_is_configured(["GOOGLE_CUDA=1"]) + if_rocm_is_configured(["TENSORFLOW_USE_ROCM=1"]),

to the //xla/service:elemental_ir_emitter build target.
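
A minimal sketch of that proposal applied to the same hunk (assuming the local_defines line above is added to the BUILD target; this is the reviewer's suggestion, not the merged change):

#ifndef TENSORFLOW_USE_ROCM
  // Same rejection as before, but keyed off the ROCm macro: the check now
  // remains in every non-ROCm build rather than only in CUDA builds.
  const HloDotInstruction* dot = Cast<HloDotInstruction>(hlo);
  if (dot->sparse_operands()) {
    return Unimplemented("Sparse dot is supported by Triton emitter only.");
  }
#endif

The practical difference is the default: #ifdef GOOGLE_CUDA drops the check from every build that does not define GOOGLE_CUDA (including CPU-only builds), while #ifndef TENSORFLOW_USE_ROCM keeps it everywhere except ROCm, which only behaves as intended once the build target actually defines these macros via the local_defines line.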
