
KeyError: 'model.embed_tokens.weight' when converting .safetensors to ggml #1000

Closed
Jake36921 opened this issue Apr 15, 2023 · 2 comments

@Jake36921
```
(base) PS E:\Games\llama.cpp> python3 convert.py OPT-13B-Erebus-4bit-128g.safetensors --outtype q4_1 --outfile 4ggml.bin
Loading model file OPT-13B-Erebus-4bit-128g.safetensors
Loading vocab file tokenizer.model
Traceback (most recent call last):
  File "E:\Games\llama.cpp\convert.py", line 1147, in <module>
    main()
  File "E:\Games\llama.cpp\convert.py", line 1137, in main
    model = do_necessary_conversions(model)
  File "E:\Games\llama.cpp\convert.py", line 983, in do_necessary_conversions
    model = convert_transformers_to_orig(model)
  File "E:\Games\llama.cpp\convert.py", line 588, in convert_transformers_to_orig
    out["tok_embeddings.weight"] = model["model.embed_tokens.weight"]
KeyError: 'model.embed_tokens.weight'
(base) PS E:\Games\llama.cpp>
```

Model is from here: https://huggingface.co/notstoic/OPT-13B-Erebus-4bit-128g

@jon-chuang
Contributor

I don't think OPT 13B is currently supported.
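The `KeyError` is consistent with this: `convert.py` looks up LLaMA-style tensor names such as `model.embed_tokens.weight`, and an OPT checkpoint stores its tensors under different names, so the lookup fails on the very first one. A quick way to see which names a `.safetensors` file actually contains is to parse its header with only the standard library. This is a sketch: it assumes the standard safetensors layout (an 8-byte little-endian length followed by that many bytes of a JSON header mapping tensor names to metadata), and the OPT-style key below is illustrative, not copied from the actual checkpoint.

```python
# Sketch: list the tensor names in a .safetensors file using only the stdlib.
# A safetensors file begins with a little-endian u64 giving the byte length
# of a JSON header; the header maps each tensor name to dtype/shape/offsets.
import io
import json
import struct

def read_safetensors_keys(fileobj):
    (header_len,) = struct.unpack("<Q", fileobj.read(8))
    header = json.loads(fileobj.read(header_len))
    # "__metadata__" is an optional reserved entry, not a tensor.
    return [k for k in header if k != "__metadata__"]

# Build a tiny in-memory file with a single OPT-style key (a hypothetical
# name for illustration) to show the mismatch the traceback implies.
fake_header = json.dumps({
    "decoder.embed_tokens.weight": {
        "dtype": "F32", "shape": [4, 2], "data_offsets": [0, 32],
    },
}).encode("utf-8")
blob = struct.pack("<Q", len(fake_header)) + fake_header + b"\x00" * 32

keys = read_safetensors_keys(io.BytesIO(blob))
print(keys)                                   # ['decoder.embed_tokens.weight']
print("model.embed_tokens.weight" in keys)    # False: the name convert.py wants
```

Pointing `read_safetensors_keys` at the real `OPT-13B-Erebus-4bit-128g.safetensors` would list its actual tensor names; if none of them match the LLaMA names `convert.py` expects, the conversion cannot proceed regardless of quantization settings.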

@github-actions github-actions bot added the stale label Mar 25, 2024

github-actions bot commented Apr 9, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.
