
llama.cpp runs WizardLM 7B #1402

Closed
rozek opened this issue May 11, 2023 · 6 comments

rozek commented May 11, 2023

Just for the record: the new WizardLM model seems to run fine with llama.cpp. Just download the file wizardLM-7B.ggml.q4_0.bin from Hugging Face.
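A minimal sketch of the steps described above, assuming llama.cpp has already been cloned and built with `make` (the Hugging Face repo path below is an assumption; search the Hub for the filename if it has moved):

```shell
# Download the pre-quantized 4-bit GGML weights (repo path is an assumption;
# search Hugging Face for "wizardLM-7B.ggml.q4_0.bin" if this URL is stale)
wget https://huggingface.co/TheBloke/wizardLM-7B-GGML/resolve/main/wizardLM-7B.ggml.q4_0.bin

# Run inference with llama.cpp's main binary
./main -m wizardLM-7B.ggml.q4_0.bin -n 256 -p "What is the capital of France?"
```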

@ggerganov (Member)

Feel free to add a reference in the README

ekryski commented May 13, 2023

Can confirm.

@Alumniminium

There's also Wizard-Vicuna, which is the best model ever.

aseok commented May 15, 2023

Please send a link.

@zacharyfmarion

@rozek what exactly did you do to quantize the model? Which files do I need to download to do it myself?
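For reference, the usual llama.cpp quantization workflow at the time was roughly as follows. This is a sketch only: it assumes the original WizardLM PyTorch checkpoint has been downloaded into models/wizardLM-7B/, that llama.cpp has been built, and that the convert.py script and quantize binary from the llama.cpp repo are used; the directory name is an assumption.

```shell
# Convert the original Hugging Face / PyTorch checkpoint to GGML F16 format
# (directory name is an assumption; point it at wherever the checkpoint lives)
python convert.py models/wizardLM-7B/

# Quantize the F16 model down to 4-bit (q4_0)
./quantize models/wizardLM-7B/ggml-model-f16.bin models/wizardLM-7B/ggml-model-q4_0.bin q4_0
```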

@github-actions github-actions bot added the stale label Mar 25, 2024

github-actions bot commented Apr 9, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.
