
Inference on the model #1

Open
ajaysurya1221 opened this issue Mar 6, 2023 · 7 comments

Comments
@ajaysurya1221

Hi, could someone shed some light on how this model can be loaded and used for inference? I know it's early and everyone might still be a little vague on this, but I'm asking only for educational purposes.

ajaysurya1221 changed the title from "Infernce on the model" to "Inference on the model" on Mar 6, 2023
@shawwn
Owner

shawwn commented Mar 7, 2023

Hiya. Yes, right this way: https://twitter.com/rowancrowe/status/1632676722612269057

Basically, clone https://github.com/shawwn/llama and use that for inference instead.

Note that it's using FP16 weights, not int8, so the memory requirements are twice those of an int8-quantized model. But personally I'm skeptical that the model can be quantized to int8 without harming its performance, and I don't need it anyway. Maybe I'll make it an option, but until then, you might want to try https://github.com/tloen/llama-int8 instead. (Note that you'll probably need to merge my improved sampler if you're seeing repetitive, low-quality outputs.)
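As a rough back-of-envelope check (weights only, so activations and the KV cache add more on top), the FP16 vs. int8 gap looks like this:

```python
# Approximate weight memory: 2 bytes/param for FP16, 1 byte/param for int8.
# Parameter counts are rounded; real usage is higher once activations and
# the KV cache are included.
def weight_gib(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1024**3

for name, n_params in [("7B", 7e9), ("13B", 13e9)]:
    print(f"{name}: ~{weight_gib(n_params, 2):.0f} GiB in FP16, "
          f"~{weight_gib(n_params, 1):.0f} GiB in int8")
```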

Also note that the repo is set up to use a context window of 2048, which will probably run out of memory on most video cards. So change "2048" to "512" in model.py if needed. (I'm not sure why this causes an OOM, since the default in example.py is 512, but I have no way to reproduce the bug.)
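If it helps, here is a sketch of the field to lower, assuming the fork keeps the upstream facebookresearch/llama ModelArgs layout (check your checkout for the exact names):

```python
# llama/model.py (sketch; field names assume the upstream ModelArgs layout).
# Lowering max_seq_len shrinks the preallocated KV cache, which is what eats
# VRAM with a 2048-token window.
from dataclasses import dataclass

@dataclass
class ModelArgs:
    dim: int = 4096
    n_layers: int = 32
    n_heads: int = 32
    vocab_size: int = -1
    max_batch_size: int = 32
    max_seq_len: int = 512  # was 2048
```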

Have fun!

@johndpope

Hey Shawn, not relevant, but it would be cool to wire this up somehow:
https://github.com/patrikzudel/PatrikZeros-ChatGPT-API-UI

@randaller

Run it on a home desktop PC: https://github.com/randaller/llama-chat

@jorahn

jorahn commented Mar 10, 2023

> Maybe I'll make it an option, but until then, you might want to try https://github.com/tloen/llama-int8 instead. (Note that you'll probably need to merge my improved sampler if you're seeing repetitive, low-quality outputs.)

This is implemented here: https://github.com/jorahn/llama-int8

@Straafe

Straafe commented Mar 10, 2023

@jorahn Nice, 13B is working on my 3090.

@randaller

randaller commented Mar 11, 2023

Hi @shawwn, I've implemented your repetition_penalty and top_k sampler in my repo (https://github.com/randaller/llama-chat) and it works great, so I'd just like to say thank you very much!
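For anyone curious, a minimal sketch of the idea (not the exact code from either repo): apply a repetition penalty to already-generated tokens, then sample from the top-k of the adjusted logits.

```python
import torch

def sample_next_token(logits: torch.Tensor, prev_tokens: torch.Tensor,
                      repetition_penalty: float = 1.2, top_k: int = 40) -> int:
    """Illustrative only: penalize repeats, then sample from the top-k logits."""
    logits = logits.clone()
    prev = prev_tokens.unique()
    # Push already-seen tokens toward lower probability (CTRL-style penalty).
    seen = logits[prev]
    logits[prev] = torch.where(seen > 0, seen / repetition_penalty,
                               seen * repetition_penalty)
    # Keep the k most likely tokens and sample from the renormalized distribution.
    vals, idx = torch.topk(logits, top_k)
    probs = torch.softmax(vals, dim=-1)
    return idx[torch.multinomial(probs, num_samples=1)].item()
```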

@G2G2G2G

G2G2G2G commented Mar 12, 2023

ggerganov/llama.cpp#23

ggerganov/llama.cpp#20

Contributing chat support to this project would let people run it on basically any web server (assuming it had enough RAM); 7B only uses ~4 GB.
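As a rough sanity check on that figure, assuming the roughly 4-bit quantization llama.cpp uses (~0.5 bytes per weight):

```python
# ~0.5 bytes per weight at 4-bit; the KV cache and scratch buffers add some
# overhead on top, which lands around the ~4 GB mark for 7B.
params_7b = 7e9
print(f"~{params_7b * 0.5 / 1024**3:.1f} GiB for 4-bit 7B weights")  # ≈ 3.3 GiB
```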
