A user ran into a problem while verifying DeepLink capabilities on an Ascend environment; the original message follows. ---------------------Original email---------------------
I see that the official DeepLink community supports Ascend chips and has adapted the llama model. While verifying DeepLink on an Ascend environment, I hit a failure installing flash_attn during Llama2 model environment setup, with the following error:
It turns out flash_attn is tightly coupled to CUDA, which blocks the verification flow and seems to require some kind of porting work. During your own verification, did you run into similar cases where a large-model dependency is tightly bound to a specific chip architecture? If so, could you share a solution?
See this answer: #1152
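A common mitigation for CUDA-only dependencies like flash_attn on non-CUDA hardware (a sketch of the general pattern, not necessarily what #1152 recommends) is to make the package an optional import and dispatch to a portable attention path when it is absent. The `portable_attention` fallback below is illustrative; it uses plain NumPy so the sketch runs anywhere:

```python
import numpy as np

# flash_attn only builds against CUDA, so treat it as optional: on
# Ascend (or any non-CUDA) hardware the import fails and we fall back.
try:
    from flash_attn import flash_attn_func  # CUDA-only package
    HAS_FLASH_ATTN = True
except ImportError:
    flash_attn_func = None
    HAS_FLASH_ATTN = False

def portable_attention(q, k, v):
    """Naive scaled dot-product attention; hardware-agnostic fallback."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    # Numerically stable softmax over the last axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def attention(q, k, v):
    """Use flash-attn when installed, else the portable path."""
    if HAS_FLASH_ATTN:
        return flash_attn_func(q, k, v)
    return portable_attention(q, k, v)
```

In a real deployment the fallback would be the framework's native attention (for example an NPU-accelerated kernel) rather than NumPy, but the dispatch structure is the same: the environment installs without flash_attn, and the model still runs.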