Android port error #963
I'm having the same problem. The only method that works for me is to use an outdated 7B LLaMA model. Here's what happens when I try to run anything different:
Seems like this could be a problem with mmap. Can you see if using `--no-mmap` helps?
Wow, I searched for days and you pinned down the problem right away. `--no-mmap` allowed my newer 7B LLaMA model to run. Thank you.
I suspect this is the problem: @comex, can you confirm whether this is an error? It looks like the fd shouldn't be closed, since it is owned by the FILE*.
I can confirm the same issue on Android; `--no-mmap` is a valid workaround.
Can you check if removing the line that closes the fd fixes it?
Hi, removing that line fixes it for me. Thank you.
It's working for me now after removing that line from the code. Can we close this now?
```
$ ./llama -m vicuna.bin
main: seed = 1681462772
llama.cpp: loading model from vicuna.bin
llama_model_load_internal: format     = ggjt v1 (latest)
llama_model_load_internal: n_vocab    = 32001
llama_model_load_internal: n_ctx      = 512
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =  59.11 KB
llama_model_load_internal: mem required  = 5809.33 MB (+ 1026.00 MB per state)
fdsan: attempted to close file descriptor 3, expected to be unowned, actually owned by FILE* 0x7e73c0a018
Aborted
```
Don't know what is happening.