[Feature Enhancement] Add a hack extractor script to solve vmap and autocast #222
PR Category
Feature Enhancement
Description
This PR provides a way to bypass vmap and autocast during graph extraction.
https://github.com/huggingface/transformers/blob/6b5bd117231f969713ed79fd4870903ab3c93edf/docs/source/en/attention_interface.md?plain=1#L194-L195 The transformers library already provides sdpa_mask_without_vmap (the vmap-based version was added in May of this year and seems related to flex attention, probably for performance reasons). It was intended for model export via torch.export, but that distinction does not matter for extracting the computation graph, so it is reused here.
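The swap described above can be done by temporarily monkeypatching the mask builder before tracing. The sketch below is a minimal, self-contained illustration: `masking_utils` here is a stand-in namespace, and the exact patch point in the real transformers package is an assumption to be adapted to the actual import path.

```python
# Sketch (assumed patch point): replace the vmap-based SDPA mask builder with
# its vmap-free variant for the duration of a trace, then restore it.
import contextlib
import types

# Stand-in for the real transformers masking module, so the sketch runs alone.
masking_utils = types.SimpleNamespace(
    sdpa_mask=lambda *a, **k: "mask-built-with-vmap",
    sdpa_mask_without_vmap=lambda *a, **k: "mask-built-without-vmap",
)

@contextlib.contextmanager
def use_vmap_free_mask(mod):
    """Temporarily point the mask builder at the vmap-free implementation."""
    original = mod.sdpa_mask
    mod.sdpa_mask = mod.sdpa_mask_without_vmap
    try:
        yield
    finally:
        mod.sdpa_mask = original  # restore the original builder after tracing

with use_vmap_free_mask(masking_utils):
    print(masking_utils.sdpa_mask())  # mask-built-without-vmap
print(masking_utils.sdpa_mask())      # mask-built-with-vmap (restored)
```

Scoping the patch in a context manager keeps the library untouched outside the trace, so normal (non-export) runs still use the vmap-based path.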
When tracing again over the FxGraph produced by the first trace, autocast interrupts the trace. Since autocast does not affect the computation graph itself, it is replaced with an empty (no-op) context manager.
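Replacing autocast with a no-op context manager can be sketched as follows. `torch.autocast` is the real API, but to keep this illustration stdlib-only, `fake_torch` below stands in for the torch module; with real torch you would patch `torch.autocast` the same way.

```python
# Sketch: neutralize autocast during the second trace by swapping it for a
# no-op context manager (contextlib.nullcontext). `fake_torch` is a stand-in.
import contextlib
import types
from unittest import mock

fake_torch = types.SimpleNamespace(
    autocast=lambda device_type, **kw: contextlib.nullcontext("autocast-on")
)

def disable_autocast(torch_mod):
    """Return a patcher that makes autocast yield nothing and change no state."""
    return mock.patch.object(
        torch_mod,
        "autocast",
        lambda device_type, **kw: contextlib.nullcontext(None),
    )

with disable_autocast(fake_torch):
    with fake_torch.autocast("cuda") as state:
        print(state)  # None: the no-op manager is in effect during tracing
```

Because `mock.patch.object` restores the original attribute on exit, autocast behaves normally again once tracing is done.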
Important
The model is included only for CI validation; there is no need to merge my model. This PR just adds a script.