7B model returning complete non-sense #474

Closed
Lesaloon opened this issue Mar 24, 2023 · 2 comments

@Lesaloon

I followed a YouTube video to build the program: https://www.youtube.com/watch?v=coIj2CU5LMU&t=186s. The video itself follows issue #103.

Expected Behavior

As a test I ran ./chat.sh in Git Bash. It ran, but when I said "hello" to the AI, I expected a "hello" back.

Current Behavior

It responded with:

‼ ▼→▬▬▲↨‼↑♥♠♦"♥ ☻ Ôüç ∟ Ôüç ↔¶
‼∟ Ôüç ♥
►♠
▼!↕☻    ▼ ↓     $▼∟▼↕♣↔"‼↔♥
☺       ►        ↔ Ôüç   #↑"▼↑♠$$▬☺☻

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

I'm running an i7 13th gen with 32 GB of RAM and a 3060.

Windows 11 Home

Git Bash to run the commands and CMake to compile

Python 3.10.10
cmake 3.26.1
g++.exe (MinGW.org GCC-6.3.0-1) 6.3.0

Failure Logs

$ ./chat.sh
main: seed = 1679687646
llama_model_load: loading model from './models/7B/ggml-model-f16.bin' - please wait ...
llama_model_load: n_vocab = 32000
llama_model_load: n_ctx   = 512
llama_model_load: n_embd  = 4096
llama_model_load: n_mult  = 256
llama_model_load: n_head  = 32
llama_model_load: n_layer = 32
llama_model_load: n_rot   = 128
llama_model_load: f16     = 1
llama_model_load: n_ff    = 11008
llama_model_load: n_parts = 1
llama_model_load: ggml ctx size = 13365.09 MB
llama_model_load: memory_size =   512.00 MB, n_mem = 16384
llama_model_load: loading model part 1/1 from './models/7B/ggml-model-f16.bin'
llama_model_load:  done
llama_model_load: model size =     0.00 MB / num tensors = 0

system_info: n_threads = 4 / 24 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |

main: prompt: ' Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.

User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User:'
main: number of tokens in prompt = 99
     1 -> ''
  4103 -> ' Trans'
   924 -> 'cript'
   310 -> ' of'
   263 -> ' a'
  7928 -> ' dialog'
 29892 -> ','
   988 -> ' where'
   278 -> ' the'
  4911 -> ' User'
 16254 -> ' interact'
 29879 -> 's'
   411 -> ' with'
   385 -> ' an'
  4007 -> ' Ass'
 22137 -> 'istant'
  4257 -> ' named'
  7991 -> ' Bob'
 29889 -> '.'
  7991 -> ' Bob'
   338 -> ' is'
  8444 -> ' helpful'
 29892 -> ','
  2924 -> ' kind'
 29892 -> ','
 15993 -> ' honest'
 29892 -> ','
  1781 -> ' good'
   472 -> ' at'
  5007 -> ' writing'
 29892 -> ','
   322 -> ' and'
  2360 -> ' never'
  8465 -> ' fails'
   304 -> ' to'
  1234 -> ' answer'
   278 -> ' the'
  4911 -> ' User'
 29915 -> '''
 29879 -> 's'
  7274 -> ' requests'
  7389 -> ' immediately'
   322 -> ' and'
   411 -> ' with'
 16716 -> ' precision'
 29889 -> '.'
    13 -> '
'
    13 -> '
'
  2659 -> 'User'
 29901 -> ':'
 15043 -> ' Hello'
 29892 -> ','
  7991 -> ' Bob'
 29889 -> '.'
    13 -> '
'
 29362 -> 'Bob'
 29901 -> ':'
 15043 -> ' Hello'
 29889 -> '.'
  1128 -> ' How'
  1122 -> ' may'
   306 -> ' I'
  1371 -> ' help'
   366 -> ' you'
  9826 -> ' today'
 29973 -> '?'
    13 -> '
'
  2659 -> 'User'
 29901 -> ':'
  3529 -> ' Please'
  2649 -> ' tell'
   592 -> ' me'
   278 -> ' the'
 10150 -> ' largest'
  4272 -> ' city'
   297 -> ' in'
  4092 -> ' Europe'
 29889 -> '.'
    13 -> '
'
 29362 -> 'Bob'
 29901 -> ':'
 18585 -> ' Sure'
 29889 -> '.'
   450 -> ' The'
 10150 -> ' largest'
  4272 -> ' city'
   297 -> ' in'
  4092 -> ' Europe'
   338 -> ' is'
 25820 -> ' Moscow'
 29892 -> ','
   278 -> ' the'
  7483 -> ' capital'
   310 -> ' of'
 12710 -> ' Russia'
 29889 -> '.'
    13 -> '
'
  2659 -> 'User'
 29901 -> ':'

main: interactive mode on.
Reverse prompt: 'User:'
sampling parameters: temp = 0.800000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.000000


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMa.
 - If you want to submit another line, end your input in '\'.

 Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.

User: Hello, Bob.
Bob: Hello. How may I help you today?
User: Please tell me the largest city in Europe.
Bob: Sure. The largest city in Europe is Moscow, the capital of Russia.
User:hello
‼ ▼→▬▬▲↨‼↑♥♠♦"♥ ☻ Ôüç ∟ Ôüç ↔¶
‼∟ Ôüç ♥
►♠
▼!↕☻    ▼ ↓     $▼∟▼↕♣↔"‼↔♥
☺       ►        ↔ Ôüç   #↑"▼↑♠$$▬☺☻
Lesaloon changed the title from "7B model returning complette non sence" to "7B model returning complete non-sense" on Mar 24, 2023
@prusnak
Collaborator

prusnak commented Mar 24, 2023

@Lesaloon
Author

After 10 hours of downloading, I forgot to validate my model...

My model was corrupted.
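For anyone hitting the same symptom (the log line `model size = 0.00 MB / num tensors = 0` is the giveaway that the file did not load correctly), verifying the download against a published checksum before converting it can save hours. A minimal sketch, simulated here with a dummy file; in practice you would compare the real model file against the checksum list the weights were distributed with (the filenames below are placeholders, not the actual distribution):

```shell
# Sketch: verify a downloaded file against a recorded checksum before use.
# We create a stand-in file and checksum list so the example is self-contained.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the downloaded model weights
printf 'pretend model weights' > ggml-model-f16.bin

# Publisher side: record the checksum (normally shipped with the download)
sha256sum ggml-model-f16.bin > SHA256SUMS

# Consumer side: verify the local copy matches; prints "<file>: OK" on success
sha256sum --check SHA256SUMS
```

If the file were truncated or corrupted mid-download, `sha256sum --check` would report `FAILED` and exit non-zero, catching the problem before any conversion or inference step.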
