
imatrix-combine-only idea #10492

Merged: 2 commits, Nov 29, 2024
Conversation

robbiemu
Contributor

@robbiemu robbiemu commented Nov 25, 2024

llama-imatrix supports combining existing imatrix files via --in-file, but it still requires the user to process at least one chunk during that final combination step. Otherwise, we get output like this:
/Users/macdev/Downloads/build/bin/llama-imatrix --model /Users/Shared/Public/huggingface/salamandra-2b-instruct/salamandra-2b-instruct_bf16.gguf \
  --rope-freq-base 10000.0 --top-p 0.95 --temp 0 --repeat-penalty 1.2 \
   --ctx-size 8192 --n-gpu-layers 25 --no-ppl --verbosity 3 \
  --in-file /Users/Shared/Public/huggingface/salamandra-2b-instruct/imatrix/oscar/imatrix_500.to_933.dat \
  --in-file /Users/Shared/Public/huggingface/salamandra-2b-instruct/imatrix/oscar/imatrix_500.from_933.dat \
  -o /Users/Shared/Public/huggingface/salamandra-2b-instruct/imatrix/oscar/imatrix_500.dat --chunk 0
build: 3906 (7eee341b) with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.6.0
main : loading imatrix from '/Users/Shared/Public/huggingface/salamandra-2b-instruct/imatrix/oscar/imatrix_500.to_933.dat'
main : loading imatrix from '/Users/Shared/Public/huggingface/salamandra-2b-instruct/imatrix/oscar/imatrix_500.from_933.dat'
main : saving combined imatrix to '/Users/Shared/Public/huggingface/salamandra-2b-instruct/imatrix/oscar/imatrix_500.dat'

save_imatrix: stored collected data after 0 chunks in /Users/Shared/Public/huggingface/salamandra-2b-instruct/imatrix/oscar/imatrix_500.dat
llama_model_loader: loaded meta data with 31 key-value pairs and 219 tensors from /Users/Shared/Public/huggingface/salamandra-2b-instruct/salamandra-2b-instruct_bf16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                         general.size_label str              = 2.3B
llama_model_loader: - kv   3:                            general.license str              = apache-2.0
llama_model_loader: - kv   4:                               general.tags arr[str,1]       = ["text-generation"]
llama_model_loader: - kv   5:                          general.languages arr[str,36]      = ["bg", "ca", "code", "cs", "cy", "da"...
llama_model_loader: - kv   6:                          llama.block_count u32              = 24
llama_model_loader: - kv   7:                       llama.context_length u32              = 8192
llama_model_loader: - kv   8:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   9:                  llama.feed_forward_length u32              = 5440
llama_model_loader: - kv  10:                 llama.attention.head_count u32              = 16
llama_model_loader: - kv  11:              llama.attention.head_count_kv u32              = 16
llama_model_loader: - kv  12:                       llama.rope.freq_base f32              = 10000.000000
llama_model_loader: - kv  13:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  14:                          general.file_type u32              = 32
llama_model_loader: - kv  15:                           llama.vocab_size u32              = 256000
llama_model_loader: - kv  16:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  17:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  18:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  19:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  20:                      tokenizer.ggml.tokens arr[str,256000]  = ["<unk>", "<s>", "</s>", "<pad>", "<|...
llama_model_loader: - kv  21:                      tokenizer.ggml.scores arr[f32,256000]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  22:                  tokenizer.ggml.token_type arr[i32,256000]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  23:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  24:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  25:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  27:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  28:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  29:                    tokenizer.chat_template str              = {%- if not date_string is defined %}{...
llama_model_loader: - kv  30:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   49 tensors
llama_model_loader: - type bf16:  170 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 104
llm_load_vocab: token to piece cache size = 1.8842 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 8192
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 5440
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 8192
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = BF16
llm_load_print_meta: model params     = 2.25 B
llm_load_print_meta: model size       = 4.20 GiB (16.00 BPW) 
llm_load_print_meta: general.name     = n/a
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 145 '<0x0A>'
llm_load_print_meta: EOT token        = 5 '<|im_end|>'
llm_load_print_meta: EOG token        = 2 '</s>'
llm_load_print_meta: EOG token        = 5 '<|im_end|>'
llm_load_print_meta: max token length = 72
llm_load_tensors: ggml ctx size =    0.20 MiB
ggml_backend_metal_log_allocated_size: allocated buffer, size =  4298.41 MiB, ( 4298.48 / 40960.00)
llm_load_tensors: offloading 24 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 25/25 layers to GPU
llm_load_tensors:      Metal buffer size =  4298.39 MiB
llm_load_tensors:        CPU buffer size =  1000.00 MiB
.......................................................
llama_new_context_with_model: n_ctx      = 8192
llama_new_context_with_model: n_batch    = 2048
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 42949.67 MB
llama_kv_cache_init:      Metal KV buffer size =  1536.00 MiB
llama_new_context_with_model: KV self size  = 1536.00 MiB, K (f16):  768.00 MiB, V (f16):  768.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.98 MiB
llama_new_context_with_model:      Metal compute buffer size =   288.00 MiB
llama_new_context_with_model:        CPU compute buffer size =   500.00 MiB
llama_new_context_with_model: graph nodes  = 774
llama_new_context_with_model: graph splits = 339

system_info: n_threads = 12 (n_threads_batch = 12) / 16 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 1 | SVE = 0 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 1 | LLAMAFILE = 1 | 
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 0.001 ms
compute_imatrix: you need at least 16384 tokens for a context of 8192 tokens
compute_imatrix: the data file you provided tokenizes to only 1 tokens

This is rather silly, so I've written a quick "please consider" pull request to indicate the desired behavior.

I believe this simple change is all that is needed, because the prompt is pre-filled from arg.cpp but not padded with control tokens until compute_imatrix calls the tokenizer downstream.
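The reason a combine-only mode makes sense at all: each imatrix file stores per-tensor accumulated activation statistics together with a chunk/call count, and merging files amounts to summing those accumulators. No model evaluation is required. A toy Python sketch of that idea (the tensor name, tuple layout, and `combine` helper are purely illustrative; this is not the actual imatrix file format):

```python
# Toy sketch of combine-only imatrix merging (illustrative, not the
# real llama.cpp imatrix on-disk format). Each "file" maps a tensor
# name to (accumulated_sums, chunk_count); combining just adds both.

def combine(files):
    combined = {}
    for data in files:
        for name, (sums, count) in data.items():
            if name not in combined:
                combined[name] = [list(sums), count]
            else:
                acc = combined[name]
                acc[0] = [a + b for a, b in zip(acc[0], sums)]  # sum accumulators
                acc[1] += count                                 # sum chunk counts
    return combined

# Two partial runs over disjoint portions of a dataset:
part_a = {"blk.0.attn_q": ([2.0, 4.0], 500)}
part_b = {"blk.0.attn_q": ([1.0, 2.0], 433)}
merged = combine([part_a, part_b])
```

Since merging touches only the stored accumulators, the final combination step has no intrinsic need to tokenize or evaluate even a single chunk.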

@robbiemu
Contributor (Author)

robbiemu commented Nov 29, 2024

My PR obviously isn't affecting llama-server, so that failure probably isn't related. But I notice I'm a couple of commits behind; let me know if you want me to catch up to latest, @slaren (I noticed you triggered the CI on this).

@ggerganov ggerganov merged commit 3a8e9af into ggerganov:master Nov 29, 2024
50 of 54 checks passed
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Dec 20, 2024
* imatrix-combine-only idea

* ensured that behavior consistent with log