
Bug: phi-3-mini-4k-it July update failing to load. #8845

Closed
hexbinoct opened this issue Aug 3, 2024 · 3 comments
Labels
bug-unconfirmed, high severity (used to report high-severity bugs in llama.cpp: malfunctions that hinder important workflows)

Comments


hexbinoct commented Aug 3, 2024

What happened?

I am trying to load the Phi-3-mini July update model as usual, but it is giving me the following error:

llama_model_load: error loading model: error loading model hyperparameters: key not found in model: phi3.attention.sliding_window
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '.\models\me\phi-3-mini-4k-it-July-5\Phi-3.1-mini-4k-instruct-Q8_0_L.gguf'
main: error: unable to load model

Also, phi-2 and the original phi-3 model still work! If it's worth knowing, I have also downloaded the latest version of LM Studio, and it is also unable to run this same model; it throws the same error.

Name and Version

PS F:\ai3> .\llama.cpp\build\bin\Release\llama-cli.exe --version
version: 3505 (b72c20b)
built with MSVC 19.40.33811.0 for x64

What operating system are you seeing the problem on?

Windows

Relevant log output

PS F:\ai3> .\llama.cpp\build\bin\Release\llama-cli.exe -m .\models\me\phi-3-mini-4k-it-July-5\Phi-3.1-mini-4k-instruct-Q8_0_L.gguf -if -p "hello"
Log start
main: build = 3505 (b72c20b8)
main: built with MSVC 19.40.33811.0 for x64
main: seed  = 1722688170
llama_model_loader: loaded meta data with 30 key-value pairs and 195 tensors from .\models\me\phi-3-mini-4k-it-July-5\Phi-3.1-mini-4k-instruct-Q8_0_L.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = phi3
llama_model_loader: - kv   1:                               general.name str              = Phi3
llama_model_loader: - kv   2:                        phi3.context_length u32              = 4096
llama_model_loader: - kv   3:  phi3.rope.scaling.original_context_length u32              = 4096
llama_model_loader: - kv   4:                      phi3.embedding_length u32              = 3072
llama_model_loader: - kv   5:                   phi3.feed_forward_length u32              = 8192
llama_model_loader: - kv   6:                           phi3.block_count u32              = 32
llama_model_loader: - kv   7:                  phi3.attention.head_count u32              = 32
llama_model_loader: - kv   8:               phi3.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:      phi3.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                  phi3.rope.dimension_count u32              = 96
llama_model_loader: - kv  11:                        phi3.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  12:                          general.file_type u32              = 7
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,32064]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,32064]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,32064]   = [3, 3, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 32000
llama_model_loader: - kv  20:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 32000
llama_model_loader: - kv  22:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  23:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {% for message in messages %}{% if me...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - kv  26:                      quantize.imatrix.file str              = /models/Phi-3.1-mini-4k-instruct-GGUF...
llama_model_loader: - kv  27:                   quantize.imatrix.dataset str              = /training_data/calibration_datav3.txt
llama_model_loader: - kv  28:             quantize.imatrix.entries_count i32              = 128
llama_model_loader: - kv  29:              quantize.imatrix.chunks_count i32              = 151
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type  f16:    2 tensors
llama_model_loader: - type q8_0:  128 tensors
llama_model_load: error loading model: error loading model hyperparameters: key not found in model: phi3.attention.sliding_window
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '.\models\me\phi-3-mini-4k-it-July-5\Phi-3.1-mini-4k-instruct-Q8_0_L.gguf'
main: error: unable to load model
PS F:\ai3> .\llama.cpp\build\bin\Release\llama-cli.exe --version
version: 3505 (b72c20b8)
built with MSVC 19.40.33811.0 for x64
PS F:\ai3>
hexbinoct added the bug-unconfirmed and high severity labels on Aug 3, 2024
ThiloteE (Contributor) commented Aug 3, 2024

Might be related to #8627

ngxson (Collaborator) commented Aug 3, 2024

Maybe you're using an old GGUF file. Try re-converting it.

hexbinoct (Author) commented

OK, so I got all the repo files again from https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/tree/main, converted them to an f16 GGUF, then quantized to Q8_0, and now it's working, thanks. Also, I am now on version 3520 (d3f0c71).
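For anyone hitting the same error, the missing key can be confirmed directly from the file's metadata before re-converting. Below is a minimal, hand-rolled sketch of a GGUF key lister (it is not the official `gguf` Python package, and it handles only the uint32 and string value types); it builds a toy GGUF v3 file that lacks the key, then checks for it:

```python
# Minimal GGUF v3 metadata-key lister (sketch; uint32 and string values only).
import struct
import tempfile

GGUF_MAGIC = b"GGUF"
T_UINT32, T_STRING = 4, 8  # GGUF metadata value-type codes


def _pack_str(s: str) -> bytes:
    raw = s.encode("utf-8")
    return struct.pack("<Q", len(raw)) + raw  # length-prefixed UTF-8


def write_minimal_gguf(path: str, kv: list) -> None:
    """Write a toy GGUF v3 file with zero tensors and the given KV pairs."""
    with open(path, "wb") as f:
        f.write(GGUF_MAGIC)
        f.write(struct.pack("<IQQ", 3, 0, len(kv)))  # version, n_tensors, n_kv
        for key, vtype, value in kv:
            f.write(_pack_str(key))
            f.write(struct.pack("<I", vtype))
            if vtype == T_UINT32:
                f.write(struct.pack("<I", value))
            elif vtype == T_STRING:
                f.write(_pack_str(value))


def read_gguf_keys(path: str) -> list:
    """Return the metadata keys of a GGUF file (uint32/string values only)."""
    keys = []
    with open(path, "rb") as f:
        assert f.read(4) == GGUF_MAGIC, "not a GGUF file"
        _version, _n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
        for _ in range(n_kv):
            (klen,) = struct.unpack("<Q", f.read(8))
            keys.append(f.read(klen).decode("utf-8"))
            (vtype,) = struct.unpack("<I", f.read(4))
            if vtype == T_UINT32:
                f.read(4)
            elif vtype == T_STRING:
                (slen,) = struct.unpack("<Q", f.read(8))
                f.read(slen)
            else:
                break  # other value types not handled in this sketch
    return keys


with tempfile.NamedTemporaryFile(suffix=".gguf", delete=False) as tmp:
    path = tmp.name
write_minimal_gguf(path, [
    ("general.architecture", T_STRING, "phi3"),
    ("phi3.context_length", T_UINT32, 4096),
])
keys = read_gguf_keys(path)
print("phi3.attention.sliding_window" in keys)  # False for this old-style file
```

A file converted with a recent llama.cpp should list `phi3.attention.sliding_window` among its keys; an older conversion will not, which matches the loader error above.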
