llama.cpp runs WizardLM 7B #1402
Feel free to add a reference in the README.
Can confirm.
There's also Wizard-Vicuna, which is the best model ever.
Please send a link.
@rozek what exactly did you do to quantize? Which files do I need to download to do it myself?
This issue was closed because it has been inactive for 14 days since being marked as stale.
Just for the record: the new WizardLM model seems to run fine with llama.cpp. Just download the file wizardLM-7B.ggml.q4_0.bin from Hugging Face.
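The download-and-run step above can be sketched as a small shell script. Only the file name wizardLM-7B.ggml.q4_0.bin comes from this thread; the Hugging Face `resolve/main` URL pattern and llama.cpp's classic `./main` flags (`-m`, `-p`, `-n`) are assumptions based on common usage, and `<user>/<repo>` is left as a placeholder for whichever repository hosts the file.

```shell
#!/bin/sh
# Sketch of the steps described above, under the assumptions stated in
# the lead-in. <user>/<repo> is a placeholder, not a real repository.
MODEL_FILE="wizardLM-7B.ggml.q4_0.bin"
URL="https://huggingface.co/<user>/<repo>/resolve/main/${MODEL_FILE}"
echo "$URL"

# Uncomment to actually fetch the model (several GB) and run it:
# curl -L -o "$MODEL_FILE" "$URL"
# ./main -m "$MODEL_FILE" -p "Hello, world" -n 64
```

Once the file is in place, pointing `./main -m` at it is all llama.cpp needs; no conversion step is required for a pre-quantized ggml q4_0 file.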