Enable AMD BF16 Grouped Gemm #3526

Open
wants to merge 3 commits into base: main
Conversation

@jwfromm (Contributor) commented Dec 22, 2024

Summary: Implementation of CK-based BF16 Grouped Gemm. Currently performance is quite poor :(

Reviewed By: zjing14

Differential Revision: D67261862
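For context, a grouped GEMM runs several independent matrix multiplications, each potentially with different problem sizes, in a single fused kernel launch. The NumPy sketch below shows only the reference semantics such a kernel must match; the names and shapes are illustrative and not FBGEMM's actual API:

```python
import numpy as np

def grouped_gemm(a_list, b_list):
    """Reference semantics of a grouped GEMM: one independent
    matmul per group. A real CK/CUTLASS kernel fuses all groups
    into a single launch instead of looping on the host."""
    return [a @ b for a, b in zip(a_list, b_list)]

# Groups may have different M dimensions (common in MoE workloads).
rng = np.random.default_rng(0)
a_list = [rng.standard_normal((m, 8)).astype(np.float32) for m in (4, 16)]
b_list = [rng.standard_normal((8, 5)).astype(np.float32) for _ in range(2)]

outs = grouped_gemm(a_list, b_list)
print([o.shape for o in outs])  # [(4, 5), (16, 5)]
```

A BF16 variant would additionally cast inputs to bfloat16 and accumulate in fp32, but the per-group shape handling is the same.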

Josh Fromm and others added 3 commits December 20, 2024 13:44
Summary:
Pull Request resolved: pytorch#3522

X-link: facebookresearch/FBGEMM#603

I previously assumed that using hipMemcpy would be more efficient than launching many kernels that directly set GPU memory. This assumption is apparently (and very surprisingly) untrue. It seems that the multi-kernel-launch approach reduces overhead considerably, giving a 10% speedup.

Differential Revision: D67531231
Summary:
This diff improves the kernel setup for cutlass bf16 grouped gemm and makes it more compatible with eager mode.

Current status: Dynamic mode working, need to add eager mode.

Differential Revision: D67423469
Summary: Implementation of CK based BF16 Grouped Gemm. Currently performance is quite poor :(

Reviewed By: zjing14

Differential Revision: D67261862

netlify bot commented Dec 22, 2024

Deploy Preview for pytorch-fbgemm-docs ready!

🔨 Latest commit: 6ae3c7a
🔍 Latest deploy log: https://app.netlify.com/sites/pytorch-fbgemm-docs/deploys/67686b1e38b7ac00084b9212
😎 Deploy Preview: https://deploy-preview-3526--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D67261862
