Llama2 execution on AMD CPU - not getting any results #101
If you are running llama2 the same way you are running it on NPU, just by changing the NPU run
Thanks for the reply @uday610
This 1.1 flow is obsolete because we now have the 1.2 flow. Hence, closing this issue.
Updates to merge changes from main to dev branch
Hello,
I am trying to run the Llama2 model on an AMD CPU, but it keeps running for more than an hour and gets stuck at the "warmup" stage. On AIE it runs within a couple of minutes.
Is there any way to speed up execution on the CPU? Or, for anyone who has run the model purely on CPU, how many hours did it actually take?
Thanks,
Ashima