The true weight file not found? #131
It seems that you forgot to pass the
This code just downloads the model weights, but it does not provide tokenizer.model, params.json, and so on!

```python
_MODELS = {
    ...
}

def available_models():
    ...

def load(name, llama_dir, llama_type="7B",
         device="cuda" if torch.cuda.is_available() else "cpu",
         download_root='ckpts', max_seq_len=512, ...):
    ...
```
Hello, I ran into the same problem: I can't find the tokenizer.model and params.json files for the BIAS-7B model. Have you solved it? My QQ is 909865905. I would be very grateful.
Sorry, I haven't solved it either.
When I run `python demo.py`, the weights are not the same as the provided ones:
```
100%|███████████████████████████████████████| 241M/241M [01:38<00:00, 2.55MiB/s]
Loading LLaMA-Adapter from ckpts/7fa55208379faf2dd862565284101b0e4a2a72114d6490a95e432cf9d9b6c813_BIAS-7B.pth
Traceback (most recent call last):
  File "demo.py", line 11, in <module>
    model, preprocess = llama.load("BIAS-7B", llama_dir, device)
  File "/root/autodl-tmp/LLaMA-Adapter/llama_adapter_v2_multimodal7b/llama/llama_adapter.py", line 309, in load
    model = LLaMA_adapter(
  File "/root/autodl-tmp/LLaMA-Adapter/llama_adapter_v2_multimodal7b/llama/llama_adapter.py", line 30, in __init__
    with open(os.path.join(llama_ckpt_dir, "params.json"), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/path/to/LLaMA/cpu/params.json'
```
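Note that in the quoted `load` signature the third positional parameter is `llama_type`, not `device`, so calling `llama.load("BIAS-7B", llama_dir, device)` appears to pass the device string as `llama_type`, which would make the loader look for `params.json` under a `cpu/` subfolder exactly as in the error path. Separately, the base LLaMA weights must already be laid out under `llama_dir`. Below is a minimal pre-flight sketch, assuming the layout implied by the traceback (`tokenizer.model` at the root of `llama_dir`, `params.json` inside a per-size subfolder such as `7B/`); `check_llama_dir` is a hypothetical helper, not part of the repository:

```python
import os

def check_llama_dir(llama_dir, llama_type="7B"):
    """Return the expected base-weight files that are missing from llama_dir.

    The layout below is an assumption inferred from the traceback, where
    __init__ opens os.path.join(llama_ckpt_dir, "params.json"); adjust it
    to match your actual LLaMA checkpoint release.
    """
    expected = [
        os.path.join(llama_dir, "tokenizer.model"),          # tokenizer at the root
        os.path.join(llama_dir, llama_type, "params.json"),  # per-size model config
    ]
    return [p for p in expected if not os.path.exists(p)]

# Example: report what is missing before calling llama.load(...)
print(check_llama_dir("/path/to/LLaMA"))
```

If this prints a non-empty list, `llama.load` will fail with the same `FileNotFoundError` no matter whether the adapter download succeeded.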