
Misc. bug: Docker Image llama-quantize Segmentation fault #11196

Open
aria3ppp opened this issue Jan 11, 2025 · 3 comments

@aria3ppp

Name and Version

root@f7545b6b4f65:/app# ./llama-cli --version
load_backend: loaded CPU backend from ./libggml-cpu-alderlake.so
version: 4460 (ba8a1f9)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu

Operating systems

Linux, Other? (Please let us know in description)

Which llama.cpp modules do you know to be affected?

llama-quantize

Command line

❯ docker run --rm -it \                                                                                                                                          
  -v ./models:/models \
  ghcr.io/ggerganov/llama.cpp:full \
  --quantize /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M

Problem description & steps to reproduce

Just try to quantize a model and you'll get the segfault:

❯ docker run --rm -it \                                    
  -v ./models:/models \
  ghcr.io/ggerganov/llama.cpp:full \
  --quantize /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M
main: build = 4460 (ba8a1f9c)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing '/models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf' to '/models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf' as Q4_K_M
llama_model_loader: loaded meta data with 30 key-value pairs and 197 tensors from /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Bge Small En v1.5
llama_model_loader: - kv   3:                            general.version str              = v1.5
llama_model_loader: - kv   4:                           general.finetune str              = en
llama_model_loader: - kv   5:                           general.basename str              = bge
llama_model_loader: - kv   6:                         general.size_label str              = small
llama_model_loader: - kv   7:                            general.license str              = mit
llama_model_loader: - kv   8:                               general.tags arr[str,5]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   9:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  10:                           bert.block_count u32              = 12
llama_model_loader: - kv  11:                        bert.context_length u32              = 512
llama_model_loader: - kv  12:                      bert.embedding_length u32              = 384
llama_model_loader: - kv  13:                   bert.feed_forward_length u32              = 1536
llama_model_loader: - kv  14:                  bert.attention.head_count u32              = 12
llama_model_loader: - kv  15:          bert.attention.layer_norm_epsilon f32              = 0.000000
llama_model_loader: - kv  16:                          general.file_type u32              = 0
llama_model_loader: - kv  17:                      bert.attention.causal bool             = false
llama_model_loader: - kv  18:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.token_type_count u32              = 2
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = bert
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = jina-v2-en
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,30522]   = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,30522]   = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:            tokenizer.ggml.unknown_token_id u32              = 100
llama_model_loader: - kv  25:          tokenizer.ggml.seperator_token_id u32              = 102
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  27:                tokenizer.ggml.cls_token_id u32              = 101
llama_model_loader: - kv  28:               tokenizer.ggml.mask_token_id u32              = 103
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  197 tensors
Segmentation fault (core dumped)
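One way to get more out of this crash is to re-run the failing quantize step under gdb inside the same image. The sketch below is hypothetical: it assumes the binaries live in `/app` (as the `./llama-cli --version` prompt above suggests), that the image's entrypoint can be overridden, and that gdb can be installed via apt inside the container.

```shell
# Hypothetical sketch: drop into the same image with the entrypoint overridden
# and run llama-quantize under gdb to capture a backtrace at the segfault.
# Paths/tag are the ones from this report; installing gdb is an assumption
# (the image may not ship it).
debug_quantize_in_container() {
  docker run --rm -it \
    -v ./models:/models \
    --entrypoint /bin/bash \
    ghcr.io/ggerganov/llama.cpp:full \
    -c 'apt-get update && apt-get install -y gdb && \
        gdb -ex run -ex bt --args ./llama-quantize \
          /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf \
          /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M'
}
```

`-ex run -ex bt` makes gdb start the program immediately and print a backtrace once it stops on the signal.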

First Bad Commit

No response

Relevant log output

No response

@JohannesGaessler
Collaborator

Was b4435 / 017cc5f still working correctly?

@aria3ppp
Author

aria3ppp commented Jan 12, 2025

Was b4435 / 017cc5f still working correctly?

Hey Johannes, do you mean b4431? I could not find b4435.

❯ docker run --rm -it \
  -v ./models:/models \
  ghcr.io/ggerganov/llama.cpp:full-b4435 \
  --quantize /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M
Unable to find image 'ghcr.io/ggerganov/llama.cpp:full-b4435' locally
docker: Error response from daemon: manifest unknown.
See 'docker run --help'.
                                                                                                                          6s 08:57:34
❯ docker run --rm -it \
  -v ./models:/models \
  ghcr.io/ggerganov/llama.cpp:full-b4434 \
  --quantize /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M
Unable to find image 'ghcr.io/ggerganov/llama.cpp:full-b4434' locally
docker: Error response from daemon: manifest unknown.
See 'docker run --help'.
                                                                                                                          5s 08:57:48
❯ docker run --rm -it \
  -v ./models:/models \
  ghcr.io/ggerganov/llama.cpp:full-b4433 \
  --quantize /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M
Unable to find image 'ghcr.io/ggerganov/llama.cpp:full-b4433' locally
docker: Error response from daemon: manifest unknown.
See 'docker run --help'.
                                                                                                                          6s 08:57:58
❯ docker run --rm -it \
  -v ./models:/models \
  ghcr.io/ggerganov/llama.cpp:full-b4432 \
  --quantize /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M
Unable to find image 'ghcr.io/ggerganov/llama.cpp:full-b4432' locally
docker: Error response from daemon: manifest unknown.
See 'docker run --help'.
                                                                                                                          6s 08:58:08
❯ docker run --rm -it \
  -v ./models:/models \
  ghcr.io/ggerganov/llama.cpp:full-b4431 \
  --quantize /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M
Unable to find image 'ghcr.io/ggerganov/llama.cpp:full-b4431' locally
full-b4431: Pulling from ggerganov/llama.cpp
6414378b6477: Already exists 
79d927991872: Pull complete 
a3a25759bec8: Pull complete 
20d3c56b53dd: Pull complete 
4f4fb700ef54: Pull complete 
f41f15e9e25c: Pull complete 
Digest: sha256:095e1c8579e6bd70605755454920b4856c5c203b3c3fe9b0f5f5a4f0e747e5fd
Status: Downloaded newer image for ghcr.io/ggerganov/llama.cpp:full-b4431
main: build = 4431 (dc7cef9f)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing '/models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf' to '/models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf' as Q4_K_M
llama_model_loader: loaded meta data with 30 key-value pairs and 197 tensors from /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Bge Small En v1.5
llama_model_loader: - kv   3:                            general.version str              = v1.5
llama_model_loader: - kv   4:                           general.finetune str              = en
llama_model_loader: - kv   5:                           general.basename str              = bge
llama_model_loader: - kv   6:                         general.size_label str              = small
llama_model_loader: - kv   7:                            general.license str              = mit
llama_model_loader: - kv   8:                               general.tags arr[str,5]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   9:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  10:                           bert.block_count u32              = 12
llama_model_loader: - kv  11:                        bert.context_length u32              = 512
llama_model_loader: - kv  12:                      bert.embedding_length u32              = 384
llama_model_loader: - kv  13:                   bert.feed_forward_length u32              = 1536
llama_model_loader: - kv  14:                  bert.attention.head_count u32              = 12
llama_model_loader: - kv  15:          bert.attention.layer_norm_epsilon f32              = 0.000000
llama_model_loader: - kv  16:                          general.file_type u32              = 0
llama_model_loader: - kv  17:                      bert.attention.causal bool             = false
llama_model_loader: - kv  18:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.token_type_count u32              = 2
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = bert
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = jina-v2-en
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,30522]   = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,30522]   = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:            tokenizer.ggml.unknown_token_id u32              = 100
llama_model_loader: - kv  25:          tokenizer.ggml.seperator_token_id u32              = 102
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  27:                tokenizer.ggml.cls_token_id u32              = 101
llama_model_loader: - kv  28:               tokenizer.ggml.mask_token_id u32              = 103
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  197 tensors
Segmentation fault (core dumped)

❯ docker run --rm -it \
  -v ./models:/models \
  ghcr.io/ggerganov/llama.cpp:full-b4436 \
  --quantize /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf Q4_K_M
main: build = 4436 (53ff6b9b)
main: built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: quantizing '/models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf' to '/models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-Q4_K_M.gguf' as Q4_K_M
llama_model_loader: loaded meta data with 30 key-value pairs and 197 tensors from /models/BAAI/bge-small-en-v1.5/bge-small-en-v1.5-f32.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = bert
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Bge Small En v1.5
llama_model_loader: - kv   3:                            general.version str              = v1.5
llama_model_loader: - kv   4:                           general.finetune str              = en
llama_model_loader: - kv   5:                           general.basename str              = bge
llama_model_loader: - kv   6:                         general.size_label str              = small
llama_model_loader: - kv   7:                            general.license str              = mit
llama_model_loader: - kv   8:                               general.tags arr[str,5]       = ["sentence-transformers", "feature-ex...
llama_model_loader: - kv   9:                          general.languages arr[str,1]       = ["en"]
llama_model_loader: - kv  10:                           bert.block_count u32              = 12
llama_model_loader: - kv  11:                        bert.context_length u32              = 512
llama_model_loader: - kv  12:                      bert.embedding_length u32              = 384
llama_model_loader: - kv  13:                   bert.feed_forward_length u32              = 1536
llama_model_loader: - kv  14:                  bert.attention.head_count u32              = 12
llama_model_loader: - kv  15:          bert.attention.layer_norm_epsilon f32              = 0.000000
llama_model_loader: - kv  16:                          general.file_type u32              = 0
llama_model_loader: - kv  17:                      bert.attention.causal bool             = false
llama_model_loader: - kv  18:                          bert.pooling_type u32              = 2
llama_model_loader: - kv  19:            tokenizer.ggml.token_type_count u32              = 2
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = bert
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = jina-v2-en
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,30522]   = ["[PAD]", "[unused0]", "[unused1]", "...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,30522]   = [3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:            tokenizer.ggml.unknown_token_id u32              = 100
llama_model_loader: - kv  25:          tokenizer.ggml.seperator_token_id u32              = 102
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  27:                tokenizer.ggml.cls_token_id u32              = 101
llama_model_loader: - kv  28:               tokenizer.ggml.mask_token_id u32              = 103
llama_model_loader: - kv  29:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  197 tensors
Segmentation fault (core dumped)
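The trial-and-error pulls above can be avoided by asking the registry whether a tag exists before running it. A small sketch, assuming `docker manifest inspect` exits non-zero for unknown tags (consistent with the "manifest unknown" errors above); the helper name is made up here.

```shell
# Hypothetical helper: probe which full-bNNNN tags are actually published
# before attempting a docker run.
tag_exists() {
  docker manifest inspect "ghcr.io/ggerganov/llama.cpp:full-b$1" > /dev/null 2>&1
}

for b in 4431 4432 4433 4434 4435 4436; do
  if tag_exists "$b"; then
    echo "full-b$b: available"
  else
    echo "full-b$b: not published"
  fi
done
```

Not every build number gets an image published, which would explain why only full-b4431 and full-b4436 resolved.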

@narc1ssus1

narc1ssus1 commented Jan 23, 2025

I compiled a Docker image myself and it ran through on aarch64, but I get the segfault in b4524.
When I ran b4524 I tried to track down the error, and found this:

 gdb --args ./llama-server -m /models/qwen2.5-0.5b-instruct-q4_k_m.gguf
GNU gdb (Ubuntu 12.1-0ubuntu1~22.04.2) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "aarch64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./llama-server...
(No debugging symbols found in ./llama-server)
(gdb) run
Starting program: /app/llama-server -m /models/qwen2.5-0.5b-instruct-q4_k_m.gguf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[New Thread 0xfffff672eea0 (LWP 689)]
build: 4534 (955a6c2d) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for aarch64-linux-gnu
system info: n_threads = 8, n_threads_batch = 8, total_threads = 8

system_info: n_threads = 8 (n_threads_batch = 8) / 8 | 

[New Thread 0xfffff5f1eea0 (LWP 690)]
main: HTTP server is listening, hostname: 0.0.0.0, port: 8080, http threads: 7
main: loading model
srv    load_model: loading model '/models/qwen2.5-0.5b-instruct-q4_k_m.gguf'
[New Thread 0xfffff570eea0 (LWP 691)]
[New Thread 0xfffff4a5aea0 (LWP 692)]
[New Thread 0xffffefffeea0 (LWP 693)]
[New Thread 0xffffef7eeea0 (LWP 694)]
[New Thread 0xffffeefdeea0 (LWP 695)]
[New Thread 0xffffee7ceea0 (LWP 696)]
[New Thread 0xffffedfbeea0 (LWP 697)]
llama_model_loader: loaded meta data with 26 key-value pairs and 291 tensors from /models/qwen2.5-0.5b-instruct-q4_k_m.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = qwen2.5-0.5b-instruct
llama_model_loader: - kv   3:                            general.version str              = v0.1
llama_model_loader: - kv   4:                           general.finetune str              = qwen2.5-0.5b-instruct
llama_model_loader: - kv   5:                         general.size_label str              = 630M
llama_model_loader: - kv   6:                          qwen2.block_count u32              = 24
llama_model_loader: - kv   7:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   8:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv   9:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv  10:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv  11:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  12:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                          general.file_type u32              = 15
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q5_0:  133 tensors
llama_model_loader: - type q8_0:   13 tensors
llama_model_loader: - type q4_K:   12 tensors
llama_model_loader: - type q6_K:   12 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_K - Medium
print_info: file size   = 462.96 MiB (6.16 BPW) 
load: special tokens cache size = 22
load: token to piece cache size = 0.9310 MB
print_info: arch             = qwen2
print_info: vocab_only       = 0
print_info: n_ctx_train      = 32768
print_info: n_embd           = 896
print_info: n_layer          = 24
print_info: n_head           = 14
print_info: n_head_kv        = 2
print_info: n_rot            = 64
print_info: n_swa            = 0
print_info: n_embd_head_k    = 64
print_info: n_embd_head_v    = 64
print_info: n_gqa            = 7
print_info: n_embd_k_gqa     = 128
print_info: n_embd_v_gqa     = 128
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 4864
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 32768
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 1B
print_info: model params     = 630.17 M
print_info: general.name     = qwen2.5-0.5b-instruct
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 151643 '<|endoftext|>'
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151643 '<|endoftext|>'
print_info: LF token         = 148848 'ÄĬ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256

Thread 1 "llama-server" received signal SIGSEGV, Segmentation fault.
0x0000fffff7ce3b30 in ggml_backend_dev_backend_reg () from libggml-base.so

(gdb) break ggml_backend_dev_backend_reg
Breakpoint 1 at 0xfffff7ce3b30
(gdb) bt
#0  0x0000fffff7ce3b30 in ggml_backend_dev_backend_reg () from libggml-base.so
#1  0x0000fffff7e559e0 in llama_model::load_tensors(llama_model_loader&) () from libllama.so
#2  0x0000fffff7df3804 in llama_model_load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&, llama_model&, llama_model_params&) () from libllama.so
#3  0x0000fffff7df6180 in llama_model_load_from_file_impl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&, llama_model_params) () from libllama.so
#4  0x0000fffff7df63f8 in llama_model_load_from_file () from libllama.so
#5  0x0000aaaaaab9dd10 in common_init_from_params(common_params&) ()
#6  0x0000aaaaaab21180 in server_context::load_model(common_params const&) ()
#7  0x0000aaaaaaac1bf8 in main ()
(gdb) 
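Since gdb reports "(No debugging symbols found in ./llama-server)", the backtrace only shows raw addresses. A sketch (not run here) of rebuilding with debug info so the frames resolve to source lines; standard CMake flow, with the build target layout assumed from the repo:

```shell
# Sketch only: rebuild llama.cpp with debug symbols, then re-run the failing
# command under gdb. RelWithDebInfo keeps optimizations but adds symbol info.
rebuild_with_symbols() {
  git clone https://github.com/ggerganov/llama.cpp &&
  cd llama.cpp &&
  cmake -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo &&
  cmake --build build -j &&
  gdb -ex run -ex bt --args ./build/bin/llama-server \
    -m /models/qwen2.5-0.5b-instruct-q4_k_m.gguf
}
```

With symbols present, the `bt` output should name the exact line inside ggml_backend_dev_backend_reg (or show a null device pointer being passed in), which would narrow the bug down considerably.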

root@ab218af66379:/app# dmesg | tail -n 20
[ 2091.821533] docker0: port 1(veth2eaa9af) entered disabled state
[ 2122.338290] docker0: port 1(vetha342967) entered blocking state
[ 2122.338294] docker0: port 1(vetha342967) entered disabled state
[ 2122.338339] device vetha342967 entered promiscuous mode
[ 2122.501629] eth0: renamed from veth37392a3
[ 2122.556755] IPv6: ADDRCONF(NETDEV_CHANGE): vetha342967: link becomes ready
[ 2122.556792] docker0: port 1(vetha342967) entered blocking state
[ 2122.556795] docker0: port 1(vetha342967) entered forwarding state
[ 2422.844454] docker0: port 1(vetha342967) entered disabled state
[ 2422.844509] veth37392a3: renamed from eth0
[ 2422.930133] docker0: port 1(vetha342967) entered disabled state
[ 2422.930505] device vetha342967 left promiscuous mode
[ 2422.930519] docker0: port 1(vetha342967) entered disabled state
[ 2465.655247] docker0: port 1(vethdf93d3b) entered blocking state
[ 2465.655252] docker0: port 1(vethdf93d3b) entered disabled state
[ 2465.655293] device vethdf93d3b entered promiscuous mode
[ 2465.818686] eth0: renamed from vetha7f4bfc
[ 2465.890147] IPv6: ADDRCONF(NETDEV_CHANGE): vethdf93d3b: link becomes ready
[ 2465.890186] docker0: port 1(vethdf93d3b) entered blocking state
[ 2465.890188] docker0: port 1(vethdf93d3b) entered forwarding state
