Credits: Large parts of the code are based on the PR by Jason Phang. Thank you for your hard work!
- Do you also want a "private GPT-3" at home?
- Does it also annoy you that people on the internet are excited about the "LLaMA weights", yet there is no interface or guide for how to actually use them?
- Are you also sick of dealing with all kinds of people on the Internet who play around with tensors and then upload code that no one can actually use?
I prepared a single repo for you with EVERYTHING you need to run LLaMA.
Here is everything you need for running (and training!) LLaMA using the Hugging Face interface 👌
import llama
tokenizer = llama.LLaMATokenizer.from_pretrained('decapoda-research/llama-7b-hf')
model = llama.LLaMAForCausalLM.from_pretrained('decapoda-research/llama-7b-hf')
print(tokenizer.decode(model.generate(tokenizer('Yo mama', return_tensors = "pt")["input_ids"])[0]))
Yeah. No overengineering bullshit.
Also: no need to clone a huge custom transformers repo that you are then stuck maintaining and updating yourself.
TL;DR: A GPT-style language model by Meta that surpasses GPT-3, released to selected researchers but leaked to the public.
LLaMA is a family of large language models trained by Meta AI. LLaMA-13B outperforms GPT-3 (175B) on most benchmarks while being more than 10 times smaller.
Paper Abstract:
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
git clone https://github.com/ypeleg/llama
import llama
MODEL = 'decapoda-research/llama-7b-hf'
- Options for MODEL:
decapoda-research/llama-7b-hf
decapoda-research/llama-13b-hf
decapoda-research/llama-30b-hf
decapoda-research/llama-65b-hf
Note: The model size is the number of parameters in the model. The larger the model, the more accurate it is, but also the slower, heavier, and more expensive it is to run (for the bigger variants, see the half-precision loading sketch below).
tokenizer = llama.LLaMATokenizer.from_pretrained(MODEL)
model = llama.LLaMAForCausalLM.from_pretrained(MODEL)
model.to('cuda')
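If you pick one of the larger variants, you will probably want to load the weights in half precision so they fit in GPU memory. Here is a minimal sketch assuming from_pretrained accepts the standard Hugging Face kwargs (torch_dtype and low_cpu_mem_usage are generic transformers arguments, not something specific to this repo):
import torch
import llama
# Assumption: llama.LLaMAForCausalLM forwards standard Hugging Face from_pretrained kwargs.
model = llama.LLaMAForCausalLM.from_pretrained(
    'decapoda-research/llama-13b-hf',
    torch_dtype = torch.float16,   # half precision: roughly halves GPU memory
    low_cpu_mem_usage = True,      # avoid materializing a full fp32 copy on CPU first
)
model.to('cuda')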
For example, we will use the prompt: "Yo mama"
We will use the tokenizer to encode the prompt into a tensor of integers.
PROMPT = 'Yo mama'
encoded = tokenizer(PROMPT, return_tensors = "pt")
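If you are curious what the tokenizer actually produced, you can inspect it directly. This assumes the tokenizer exposes the standard Hugging Face API:
# Assumption: standard Hugging Face tokenizer methods are available.
print(encoded["input_ids"])         # a 2-D tensor of token ids, shape (1, sequence_length)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0]))  # the corresponding subword pieces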
We will use the model to generate the output.
generated = model.generate(encoded["input_ids"].cuda())[0]
decoded = tokenizer.decode(generated)
print(decoded)
Expected output: "Yo mama is so fat, she has to buy two seats on the plane."
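By default, generate is greedy and stops after a short continuation. The usual Hugging Face generation arguments should work here as well; a sketch, assuming generate accepts the standard transformers kwargs (max_new_tokens, do_sample, temperature, top_p are generic, not specific to this repo):
generated = model.generate(
    encoded["input_ids"].cuda(),
    max_new_tokens = 50,    # how many tokens to add after the prompt
    do_sample = True,       # sample instead of greedy decoding
    temperature = 0.7,      # lower = more conservative, higher = more random
    top_p = 0.95,           # nucleus sampling
)[0]
print(tokenizer.decode(generated))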
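The intro also promised training. Here is a minimal fine-tuning sketch using the standard Hugging Face Trainer; the toy dataset, the TrainingArguments values, and the assumption that LLaMAForCausalLM plugs into Trainer like any other causal LM are all illustrative, not taken from the repo:
from transformers import Trainer, TrainingArguments

# Hypothetical toy dataset: a list of already-tokenized examples.
# For causal LM training the labels are the input ids themselves.
texts = ["Yo mama is very nice.", "Yo mama tells great jokes."]
train_data = [
    {"input_ids": tokenizer(t)["input_ids"], "labels": tokenizer(t)["input_ids"]}
    for t in texts
]

args = TrainingArguments(
    output_dir = "llama-finetuned",    # hypothetical output path
    per_device_train_batch_size = 1,   # the weights are heavy; keep the batch tiny
    num_train_epochs = 1,
    fp16 = True,                       # train in half precision to save memory
)

trainer = Trainer(model = model, args = args, train_dataset = train_data)
trainer.train()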