Enhance the robustness of the flash attention check #20495
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##           master   #20495      +/-   ##
==========================================
- Coverage   82.09%   82.06%   -0.03%
==========================================
  Files         515      515
  Lines       47575    47615      +40
  Branches     7463     8531    +1068
==========================================
+ Hits        39056    39077      +21
- Misses       6710     6721      +11
- Partials     1809     1817       +8
Flags with carried forward coverage won't be shown.
☔ View full report in Codecov by Sentry.
This PR should be ready to run on GPU CI again now. I'm wondering whether it's possible to use a newer GPU for the Keras GPU CI, since flash attention isn't available on the T4 and these tests are currently being skipped. Colab: https://colab.research.google.com/drive/1-fQdyAs-w5lM7ZGN8mroWmQP9HBJxYcK?usp=sharing

EDIT: By the way, I saw the announcement. Good luck and best wishes, @fchollet!
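A minimal sketch of the kind of capability-based skip involved here, assuming a PyTorch-backed test environment; the helper name and the compute-capability threshold (SM 8.0, which excludes the T4 at SM 7.5) are assumptions for illustration, not code from the Keras test suite:

```python
# Illustrative only: skip flash-attention tests on GPUs that cannot run them.
# Assumes flash attention needs a compute capability of at least 8.0 (Ampere);
# the T4 reports 7.5, so the test would be skipped on that GPU.
import pytest


def flash_attention_capable_gpu() -> bool:
    """Best-effort check for a flash-attention-capable CUDA GPU."""
    try:
        import torch
    except ImportError:
        return False
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability()
    return major >= 8


@pytest.mark.skipif(
    not flash_attention_capable_gpu(),
    reason="Flash attention is not supported on this GPU (e.g. T4).",
)
def test_attention_with_flash_attention():
    # Hypothetical placeholder; the real tests live in the Keras test suite.
    ...
```

Running the GPU CI on an Ampere-or-newer instance (for example an L4 or A100) would make that skip condition false and actually exercise the flash-attention path.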
LGTM, thank you!
The assert failure in coverage.py 7.6.5 is now fixed and released as part of coverage 7.6.6.
* Enhance the robustness of the flash attention check.
* Fix CI
* Fix CI again
* Fix GPU CI again and again...
* No raise in tests
* Pin coverage==7.6.1
* Fix the comment
Also fixes GPU CI.
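In the spirit of the PR title and the "No raise in tests" commit, here is a hedged sketch of what a more robust flash-attention availability check can look like: probe the backend inside a try/except and return a boolean instead of letting the error propagate. It uses PyTorch's torch.nn.attention.sdpa_kernel API (PyTorch 2.3+) as a stand-in and is not the code changed in this PR.

```python
# Illustrative pattern, not the Keras implementation: ask the backend to run
# flash attention on the given tensors and report failure as "unsupported"
# rather than raising, unless the caller explicitly opts into raise_error=True.
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel


def can_use_flash_attention(query, key, value, raise_error=False):
    """Return True if the flash kernel accepts these inputs, False otherwise."""
    try:
        # Restrict SDPA to the flash backend only, so unsupported setups fail here.
        with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
            torch.nn.functional.scaled_dot_product_attention(query, key, value)
        return True
    except RuntimeError:
        if raise_error:
            raise
        return False


if __name__ == "__main__":
    q = k = v = torch.randn(1, 2, 8, 16, dtype=torch.float16)
    if torch.cuda.is_available():
        q, k, v = (t.cuda() for t in (q, k, v))
    # Prints True where the flash kernel is usable for these inputs, False otherwise.
    print(can_use_flash_attention(q, k, v))
```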