
Deprecate attention patching for llama #1047

Merged 8 commits from remove-triton-patch into mosaicml:main on Mar 21, 2024

Conversation

@dakinggg (Collaborator) commented on Mar 21, 2024

Now that Flash Attention 2 is integrated into transformers, we no longer need to monkeypatch Llama attention.
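
For context, a minimal sketch of the HF-native path that replaces the patch (the model name and dtype below are illustrative, not taken from this PR):

```python
# Sketch: with transformers' built-in Flash Attention 2 support, no llama
# attention monkeypatch is needed; request the implementation directly.
# Requires the flash-attn package and a fp16/bf16 dtype.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # illustrative model name
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
```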

Also changes our versioned deprecation warning to a UserWarning so that it shows up in the logs.
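
A minimal sketch of the idea (class name, signature, and version string here are assumed for illustration; the actual definition in llm-foundry may differ):

```python
import warnings

# Sketch: subclassing UserWarning instead of DeprecationWarning means the
# message is displayed by default (DeprecationWarning is ignored outside
# __main__), so it reliably appears in run logs.
class VersionedDeprecationWarning(UserWarning):
    def __init__(self, message: str, remove_version: str) -> None:
        super().__init__(f"{message} It will be removed in version {remove_version}.")

# Example usage with an illustrative removal version.
warnings.warn(
    VersionedDeprecationWarning(
        "Attention patching for llama is deprecated.",
        remove_version="0.7.0",
    )
)
```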

See llama-patch-triton-after-6-MYjOo9 for a run that sets the attention patch type, switches to Flash Attention 2, and emits the warning.

@dakinggg marked this pull request as ready for review March 21, 2024 18:19
@dakinggg requested a review from irenedea March 21, 2024 18:20
@irenedea (Contributor) left a comment


thanks!

@dakinggg enabled auto-merge (squash) March 21, 2024 18:33
@dakinggg merged commit 3348b59 into mosaicml:main Mar 21, 2024
10 checks passed
KuuCi pushed a commit that referenced this pull request Apr 18, 2024
@dakinggg deleted the remove-triton-patch branch June 22, 2024 20:47
Labels: None yet
Projects: None yet
2 participants