SEHException on Tokenize model. #791

Closed
cm4ker opened this issue Jun 13, 2024 · 5 comments
cm4ker commented Jun 13, 2024

Description

Hello! Thanks for the great project!

I ran into an issue when I try to tokenize text containing a newline character '\n'.

I have no idea how to debug this (I have no experience debugging native code). Maybe this issue needs to be routed to the llama.cpp project?

I hit the error while building a pipeline from a PDF document -> textual representation -> (error). Is there any workaround to fix this?

Thanks!

Here is the console log:

llama_model_loader: loaded meta data with 24 key-value pairs and 389 tensors from C:\Users\user\.cache\hub\model--vonjack--bge-m3-gguf\snapshots\155a6566f70b35945ba2fa8c868bef33d652a512\bge-m3-f16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.name str              = bge-m3
llama_model_loader: - kv   2:                           bert.block_count u32              = 24
llama_model_loader: - kv   3:                        bert.context_length u32              = 8194
llama_model_loader: - kv   4:                      bert.embedding_length u32              = 1024
llama_model_loader: - kv   5:                   bert.feed_forward_length u32              = 4096
llama_model_loader: - kv   6:                  bert.attention.head_count u32              = 16
llama_model_loader: - kv   7:          bert.attention.layer_norm_epsilon f32              = 0.000010
llama_model_loader: - kv   8:                          general.file_type u32              = 1
llama_model_loader: - kv   9:                      bert.attention.causal bool             = false
llama_model_loader: - kv  10:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  11:            tokenizer.ggml.token_type_count u32              = 1
llama_model_loader: - kv  12:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  13:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  14:                      tokenizer.ggml.tokens arr[str,250002]  = ["<s>", "<pad>", "</s>", "<unk>", ","...
llama_model_loader: - kv  15:                      tokenizer.ggml.scores arr[f32,250002]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  16:                  tokenizer.ggml.token_type arr[i32,250002]  = [3, 2, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  17:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  18:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.unknown_token_id u32              = 3
llama_model_loader: - kv  20:          tokenizer.ggml.seperator_token_id u32              = 2
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  22:                tokenizer.ggml.cls_token_id u32              = 0
llama_model_loader: - kv  23:               tokenizer.ggml.mask_token_id u32              = 250001
llama_model_loader: - type  f32:  243 tensors
llama_model_loader: - type  f16:  146 tensors
llm_load_vocab: SPM vocabulary, but newline token not found: invalid unordered_map<K, T> key! Using special_pad_id instead.
llm_load_vocab: mismatch in special tokens definition ( 51189/250002 vs 5/250002 ).
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = bert
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 250002
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 8194
llm_load_print_meta: n_embd           = 1024
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 1.0e-05
llm_load_print_meta: f_norm_rms_eps   = 0.0e+00
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 4096
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 0
llm_load_print_meta: pooling type     = 2
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 8194
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 335M
llm_load_print_meta: model ftype      = F16
llm_load_print_meta: model params     = 566.71 M
llm_load_print_meta: model size       = 1.06 GiB (16.01 BPW)
llm_load_print_meta: general.name     = bge-m3
llm_load_print_meta: BOS token        = 0 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 3 '<unk>'
llm_load_print_meta: SEP token        = 2 '</s>'
llm_load_print_meta: PAD token        = 1 '<pad>'
llm_load_print_meta: CLS token        = 0 '<s>'
llm_load_print_meta: MASK token       = 250001 '<mask>'
llm_load_tensors: ggml ctx size =    0.18 MiB
llm_load_tensors:        CPU buffer size =  1081.52 MiB
........................................................
Success: LLama.Native.LLamaToken[]
System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
   at LLama.Native.NativeApi.llama_tokenize(SafeLlamaModelHandle model, Byte* text, Int32 text_len, LLamaToken* tokens, Int32 n_max_tokens, Boolean add_special, Boolean parse_special)
   at LLama.Native.SafeLlamaModelHandle.Tokenize(String text, Boolean add_bos, Boolean special, Encoding encoding)
   at LLama.LLamaWeights.Tokenize(String text, Boolean add_bos, Boolean special, Encoding encoding)
   at SemanticKernelTest.IssueReproduce.Run() in C:\projects\experiments\SemanticKernelTest\IssueReproduce.cs:line 36

Process finished with exit code 0.

Here is the reproduction code:

using System.Text;
using HuggingfaceHub;
using LLama;
using LLama.Common;

namespace SemanticKernelTest;

public class IssueReproduce
{
    public class ConsoleProgress : IProgress<int>
    {
        public void Report(int value)
        {
            Console.SetCursorPosition(0, Console.CursorTop);
            Console.Write($"{value:D3}%");
        }
    }
    public static async Task Run()
    {
        var modelPath = await HFDownloader.DownloadFileAsync("vonjack/bge-m3-gguf", "bge-m3-f16.gguf",
            progress: new ConsoleProgress());

        try
        {
            var seed = 1337u;
            var parameters = new ModelParams(modelPath)
            {
                Seed = seed,
                Embeddings = true,
                ContextSize = 512,
            };

            using var model = LLamaWeights.LoadFromFile(parameters);

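            // The first call (no newline) succeeds; the second, with a
            // trailing '\n', throws the native SEHException shown above.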
            Console.WriteLine("Success: " + model.Tokenize("test", true, true, Encoding.Default));
            Console.WriteLine("Failed: " + model.Tokenize("test\n", true, true, Encoding.Default));
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
        finally
        {
            // uncomment to delete the model after the test
            // File.Delete(modelPath);
        }
    }
}
@AsakusaRinne (Collaborator):

What if you use model.Tokenize("test", true, false, Encoding.Default)?

@cm4ker (Author) commented Jun 14, 2024

> What if you use model.Tokenize("test", true, false, Encoding.Default)?

The same error.

I found this thread: ggerganov/llama.cpp#6007
It seems this embedding model is not completely supported yet. =\
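
Until then, a possible interim workaround (just a sketch, assuming the crash comes only from the missing newline token that the load log warns about) is to strip or replace '\n' before tokenizing:

// Sketch of a possible workaround, not verified against this model:
// the load log reports "SPM vocabulary, but newline token not found",
// so avoid passing '\n' to the tokenizer at all. The replacement
// character (a space) is an assumption; pick whatever suits the pipeline.
static string StripNewlines(string text) =>
    text.Replace("\r\n", " ").Replace('\n', ' ');

// Usage with the repro above:
// var tokens = model.Tokenize(StripNewlines("test\n"), true, true, Encoding.Default);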

@AsakusaRinne (Collaborator):

Maybe it's an upstream issue. Does the problem appear when you use another model?

@cm4ker (Author) commented Jun 16, 2024

@AsakusaRinne Sorry for the delay. I checked with the leliuga/all-MiniLM-L12-v2-GGUF model and it works.
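
For reference, the swap in the repro above would look roughly like this (the .gguf filename below is a hypothetical placeholder, not taken from this thread):

// Hypothetical substitution in the repro's download call; the filename is
// a placeholder assumption, check the repo for the actual artifact name.
var modelPath = await HFDownloader.DownloadFileAsync(
    "leliuga/all-MiniLM-L12-v2-GGUF",
    "all-MiniLM-L12-v2.F16.gguf", // placeholder filename
    progress: new ConsoleProgress());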

@cm4ker (Author) commented Jun 18, 2024

I think we can close this now and wait for llama.cpp to add support for this model. Thanks for the help, @AsakusaRinne!

cm4ker closed this as completed on Jun 18, 2024