
How to implement a GEMM with FP16 and INT4 using the kernel in FasterTransformer/src/fastertransformer/kernels/cutlass_kernels/fpA_intB_gemm #794

Open
AkatsukiChiri opened this issue Jul 26, 2024 · 0 comments

AkatsukiChiri commented Jul 26, 2024

I am trying to implement a GEMM with FP16 activations and INT4 weights. I would like to call the fpA_intB_gemm_fp16_int4 kernel located in FasterTransformer/src/fastertransformer/kernels/cutlass_kernels/fpA_intB_gemm, but the existing examples are all full model-inference pipelines. If I only want to run the GEMM kernel on its own, what should I do?
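For reference, a minimal standalone driver might look like the sketch below. It assumes the `CutlassFpAIntBGemmRunner<T, WeightType>` interface declared in `fpA_intB_gemm.h` (the `fpA_intB_gemm_fp16_int4` translation unit instantiates it for `half`/`cutlass::uint4b_t`); the exact signatures, and especially the required weight preprocessing into the CUTLASS interleaved layout (see the utilities in `cutlass_preprocessors.h`), should be verified against the headers in your checkout. Error checking is omitted for brevity.

```cpp
// Minimal standalone sketch of driving the fpA_intB FP16 x INT4 GEMM runner.
// Assumptions: the runner interface in fpA_intB_gemm.h, and that the INT4
// weights have already been preprocessed into the layout the kernel expects.
#include <cuda_fp16.h>
#include <cuda_runtime.h>

#include "cutlass/numeric_types.h"
#include "src/fastertransformer/kernels/cutlass_kernels/fpA_intB_gemm/fpA_intB_gemm.h"

using fastertransformer::CutlassFpAIntBGemmRunner;

int main()
{
    const int m = 128, n = 256, k = 512;

    // Activations A: FP16, row-major [m, k].
    half* d_A;
    cudaMalloc(&d_A, sizeof(half) * m * k);

    // Weights B: packed INT4, two values per byte -> k * n / 2 bytes.
    // NOTE (assumption): B must be preprocessed into the interleaved layout
    // (see cutlass_preprocessors.h), not left as plain row-major INT4.
    cutlass::uint4b_t* d_B;
    cudaMalloc(&d_B, k * n / 2);

    // Per-output-channel dequantization scales: FP16, [n].
    half* d_scales;
    cudaMalloc(&d_scales, sizeof(half) * n);

    // Output C: FP16, row-major [m, n].
    half* d_C;
    cudaMalloc(&d_C, sizeof(half) * m * n);

    // Runner instantiated for FP16 activations and INT4 weights.
    CutlassFpAIntBGemmRunner<half, cutlass::uint4b_t> runner;

    // Some tile configurations need scratch space; query and allocate it.
    const size_t ws_bytes = runner.getWorkspaceSize(m, n, k);
    char* d_ws = nullptr;
    cudaMalloc(&d_ws, ws_bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // C = A * dequant(B, scales)
    runner.gemm(d_A, d_B, d_scales, d_C, m, n, k, d_ws, ws_bytes, stream);
    cudaStreamSynchronize(stream);

    cudaFree(d_A); cudaFree(d_B); cudaFree(d_scales); cudaFree(d_C); cudaFree(d_ws);
    cudaStreamDestroy(stream);
    return 0;
}
```

Filling the buffers with real data and checking the result against a dequantize-then-cuBLAS reference would confirm the weight layout is correct; a wrong layout typically produces numerically plausible but incorrect output rather than an error.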
