(base) PS E:\Games\llama.cpp> python3 convert.py OPT-13B-Erebus-4bit-128g.safetensors --outtype q4_1 --outfile 4ggml.bin
Loading model file OPT-13B-Erebus-4bit-128g.safetensors
Loading vocab file tokenizer.model
Traceback (most recent call last):
  File "E:\Games\llama.cpp\convert.py", line 1147, in <module>
    main()
  File "E:\Games\llama.cpp\convert.py", line 1137, in main
    model = do_necessary_conversions(model)
  File "E:\Games\llama.cpp\convert.py", line 983, in do_necessary_conversions
    model = convert_transformers_to_orig(model)
  File "E:\Games\llama.cpp\convert.py", line 588, in convert_transformers_to_orig
    out["tok_embeddings.weight"] = model["model.embed_tokens.weight"]
KeyError: 'model.embed_tokens.weight'
(base) PS E:\Games\llama.cpp>
Model is from here: https://huggingface.co/notstoic/OPT-13B-Erebus-4bit-128g
I don't think OPT 13B is currently supported.
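For anyone hitting the same error: convert.py looks tensors up by the LLaMA naming scheme (for example model.embed_tokens.weight), and an OPT checkpoint stores its weights under different names, so the lookup fails with the KeyError shown above. A quick way to confirm is to list the tensor names in the safetensors file. This is a minimal sketch, assuming the safetensors package is installed and the path is adjusted to your local copy:

# Minimal sketch: list the tensor names in a .safetensors checkpoint to see
# whether they follow the LLaMA layout that convert.py expects.
# Assumes `pip install safetensors`; the path below is the file from the report.
from safetensors import safe_open

path = "OPT-13B-Erebus-4bit-128g.safetensors"  # adjust to your local file

with safe_open(path, framework="pt", device="cpu") as f:
    for name in f.keys():
        print(name)

If "model.embed_tokens.weight" is not among the printed names, the checkpoint is not in the layout convert.py handles, which matches the KeyError in the traceback.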
This issue was closed because it has been inactive for 14 days since being marked as stale.