
[Unity][Cutlass] Fix C source generation of dense operation #16476

Merged 1 commit into apache:main on Apr 30, 2024

Conversation

@creaitr (Contributor) commented Jan 26, 2024

It seems there is an issue when generating the C source of a dense operation using CUTLASS.
Even though the dense operation contains a bias parameter, the generated C code does not reflect that bias correctly.

Please check the change and leave feedback.

This commit fixes an issue that generates wrong C sources for the dense operation using CUTLASS.
@vinx13 (Member) left a comment


please add a test case

@@ -566,7 +566,10 @@ def get_flattened_batch_dim(arg_name, batch_rank):
transposed = "transposed" in func_name or "dense" in func_name
lhs_arg_idx = _get_optional_int_annotation(annotations, "lhs_arg_idx", 0)
rhs_arg_idx = _get_optional_int_annotation(annotations, "rhs_arg_idx", 1)
bias_arg_idx = _get_optional_int_annotation(annotations, "bias_arg_idx", None)
if "bias" in func_name:
bias_arg_idx = _get_optional_int_annotation(annotations, "bias_arg_idx", 2)
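
To illustrate what the change does, here is a minimal sketch, assuming _get_optional_int_annotation simply returns the annotated value when present and the supplied default otherwise (an assumption about the helper, not the TVM source):

def _get_optional_int_annotation(annotations, key, default):
    # Assumed behavior: prefer the annotated value, else fall back to the default.
    return annotations[key] if key in annotations else default

# Relax composite functions annotate bias_arg_idx explicitly, so the lookup succeeds:
_get_optional_int_annotation({"bias_arg_idx": 2}, "bias_arg_idx", None)  # -> 2
# Relay patterns such as "cutlass.dense_bias" carry no such annotation, so the old
# default of None left the codegen without a bias argument:
_get_optional_int_annotation({}, "bias_arg_idx", None)  # -> None
# With this change, a pattern name containing "bias" falls back to argument index 2:
_get_optional_int_annotation({}, "bias_arg_idx", 2)  # -> 2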
Member

if bias is used in the pattern (func name), it should exist in the annotation. cc @yelite

Contributor

Maybe there is a pattern with bias that doesn't have the annotation? IIUC this code path is shared between Relax and Relay. @creaitr can you share a sample where the bias parameter isn't generated correctly?

@creaitr (Contributor, Author) commented Jan 31, 2024

@yelite
Sorry for the delayed response.

I tested this before the merge of the unity branch into main.
When I tried to rebuild TVM with the newest branch, there was an issue from the cutlass_fpA_intB_gemm 3rdparty: error: identifier "__hfma2" is undefined.
This error seems related to the compute capability setting for nvcc, and I temporarily resolved it by adding set(CMAKE_CUDA_ARCHITECTURES "native") to the tvm/CMakeLists.txt file. (Figuring this out took some time.)
A minor question: have you encountered the same error, and if so, how did you resolve it?

Regarding the bias issue of the dense operation: I tried to compile the module below with the CUTLASS backend.

from tvm import relay

# Builds nn.dense followed by nn.bias_add, which should match the CUTLASS dense_bias pattern.
def dense_add_bias(data, weight=None, bias=None, units=None, **kwargs):
    name = kwargs.get("name")
    kwargs.pop("name")
    if not weight:
        weight = relay.var(name + "_weight")
    if not bias:
        bias = relay.var(name + "_bias")
    data = relay.nn.dense(data, weight, units, **kwargs)
    data = relay.nn.bias_add(data, bias, axis=-1)
    return data
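
For context, here is a rough sketch of how such a module might be built and partitioned for CUTLASS; partition_for_cutlass is the Relay entry point mentioned later in this thread, and the variable names and shapes are only assumptions chosen to match the generated kernel below (M=1, N=2048, K=1024):

import tvm
from tvm import relay
from tvm.relay.op.contrib.cutlass import partition_for_cutlass

# Hypothetical input matching the GEMM dimensions in the generated code below.
data = relay.var("data", shape=(1, 1024), dtype="float32")
out = dense_add_bias(data, units=2048, name="dense0")
func = relay.Function(relay.analysis.free_vars(out), out)
mod = tvm.IRModule.from_expr(func)
# Offload the supported dense + bias_add pattern to the CUTLASS BYOC backend.
mod = partition_for_cutlass(mod)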

There were some errors in the generated .cu file.
In the following generated code, I'm suspicious about the ${bias_arg} parts.

void tvmgen_default_cutlass_main_3_(DLTensor* cutlass_3_i0, DLTensor* cutlass_3_i1, DLTensor* cutlass_3_i2, DLTensor* out0){

  using ElementInputA = float;
  using ElementInputB = float;
  using ElementOutput = float;
  using ElementComputeEpilogue = float;

  // ... omitted

  using Gemm = Operation_cutlass_tensorop_s1688gemm_128x128_16x4_tn_align1;
  int M = 1;
  int N = 2048;
  int K = 1024;
  cutlass::gemm::GemmCoord problem_size(M, N, K);
  ElementComputeEpilogue alpha = ElementComputeEpilogue(1);
  ElementComputeEpilogue beta = ElementComputeEpilogue(0);
  void* ptr_a = (void*)(cutlass_3_i0->data);
  void* ptr_b = (void*)(cutlass_3_i1->data);
  void* ptr_bias = (void*)(${bias_arg}->data);

  void* ptr_out = (void*)(out0->data);
  
  typename Gemm::Arguments arguments{
   problem_size,
   {static_cast<ElementInputA*>(ptr_a), K}, 
   {static_cast<ElementInputB*>(ptr_b), K}, 
   {static_cast<ElementOutput*>(ptr_bias), (${bias_arg}->ndim == 1 || ${bias_arg}->shape[${bias_arg}->ndim - 2] == 1) ? 0 : N}, 
   {static_cast<ElementOutput*>(ptr_out), N}, 
   {alpha},
   1
  };

  // ... omitted
}

After applying this PR, the above error was resolved for me.
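
To see why the fix repairs the unsubstituted ${bias_arg} placeholder, here is a hypothetical illustration of the substitution step; string.Template stands in for the real TVM codegen, which this does not reproduce:

from string import Template

# Kernel arguments as they appear in the generated function signature above.
args = ["cutlass_3_i0", "cutlass_3_i1", "cutlass_3_i2"]
bias_arg_idx = 2  # what the fix resolves for a Relay dense + bias pattern
template = Template("void* ptr_bias = (void*)(${bias_arg}->data);")
print(template.substitute(bias_arg=args[bias_arg_idx]))
# Expected output: void* ptr_bias = (void*)(cutlass_3_i2->data);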

Actually, I'm not familiar with this TVM project, so please give feedback if what I changed is the wrong part.

Member

https://github.com/apache/tvm/blob/main/cmake/modules/CUDA.cmake#L47
We have a default value for CMAKE_CUDA_ARCHITECTURES here; does that work? Right now it requires CUDA arch >= 53 to compile.

@creaitr (Contributor, Author)

@vinx13
It seems test_cutlass.py#L549 already tests the dense_bias modules.
If you meant a different test case, please let me know.

@creaitr (Contributor, Author)

Oh, now I see the CUDA.cmake#L47.

In my case, CMAKE_CUDA_ARCHITECTURES was already defined as 52, so it was not changed to native.
(I compiled with CMake version 3.28.1.)


> if bias is used in the pattern (func name), it should exist in the annotation. cc @yelite

It seems bias_arg_idx is only annotated in Relax, so partition_for_cutlass is broken in Relay.

@vinx13 merged commit a320b63 into apache:main on Apr 30, 2024
24 checks passed