You need to request LLaMA's official weights yourself. Our weights, such as BIAS-7B, will be downloaded automatically.
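A quick way to confirm the official weights are laid out as expected before calling `llama.load` (a minimal sketch; the file list comes from the directory tree in the issue, and `missing_llama_files` is a hypothetical helper name, not part of the repo):

```python
import os

# Files the 7B checkpoint directory is expected to contain, per the tree above.
REQUIRED_FILES = [
    "7B/checklist.chk",
    "7B/consolidated.00.pth",
    "7B/params.json",
    "tokenizer.model",
]

def missing_llama_files(llama_dir):
    """Return the official LLaMA weight files absent from llama_dir."""
    return [f for f in REQUIRED_FILES
            if not os.path.exists(os.path.join(llama_dir, f))]
```

If this returns a non-empty list, the official weights are incomplete and loading will fail before the adapter weights are even fetched.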
Hello, have you solved this? I have the same problem: I can't find the params.json and tokenizer.model files for BIAS-7B.
/path/to/llama_model_weights
├── 7B
│ ├── checklist.chk
│ ├── consolidated.00.pth
│ └── params.json
└── tokenizer.model
I just want to run the finetune code.
import cv2
import llama
import torch
from PIL import Image
device = "cuda" if torch.cuda.is_available() else "cpu"
llama_dir = "/path/to/LLaMA/"
# choose from BIAS-7B, LORA-BIAS-7B, LORA-BIAS-7B-v21
model, preprocess = llama.load("BIAS-7B", llama_dir, llama_type="7B", device=device)
model.eval()
prompt = llama.format_prompt("Please introduce this painting.")
img = Image.fromarray(cv2.cvtColor(cv2.imread("../docs/logo_v1.png"), cv2.COLOR_BGR2RGB))  # OpenCV loads BGR; convert to RGB for PIL
img = preprocess(img).unsqueeze(0).to(device)
result = model.generate(img, [prompt])[0]
print(result)