
flash_attn cannot be installed when deploying the Llama2 model environment on Ascend #1153

Open
MiaoYYu opened this issue Apr 23, 2024 · 1 comment

MiaoYYu (Contributor) commented Apr 23, 2024

A user ran into this problem while validating DeepLink capabilities in an Ascend environment; the original message follows.
--------------------- Original email ---------------------

I see that the DeepLink community officially supports Ascend chips and has already adapted the Llama model. While validating DeepLink in an Ascend environment, I could not install flash_attn when setting up the Llama2 model environment, and got the following error:

It turns out flash_attn is strongly coupled to CUDA, which blocks the validation workflow and would require some kind of porting work. Did you run into similar problems during validation, where a large model depends on a library strongly tied to the chip architecture? If so, could you share the solution?

[screenshot: flash_attn installation error]
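For context only (this is not the fix referenced in this thread; see the linked answer below): a common workaround pattern is to treat flash_attn as an optional dependency and fall back to PyTorch's built-in scaled_dot_product_attention when the CUDA-only package cannot be installed. The sketch below is a hypothetical illustration of that pattern, not code from DeepLink or the Llama2 deployment scripts.

```python
# Minimal sketch (assumption, not the DeepLink fix): make flash_attn optional and
# fall back to PyTorch's native attention on hardware where it cannot be built.
import torch
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func  # CUDA-only wheel; fails to install on Ascend
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False


def attention(q, k, v, causal=True):
    """q, k, v: (batch, seq_len, num_heads, head_dim)."""
    if HAS_FLASH_ATTN:
        return flash_attn_func(q, k, v, causal=causal)
    # Portable fallback: scaled_dot_product_attention expects (batch, heads, seq, dim).
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, is_causal=causal)
    return out.transpose(1, 2)
```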

MiaoYYu added the ascend label Apr 23, 2024
yangbofun (Collaborator) commented

Please refer to this answer: #1152
