
terminate called after throwing an instance of 'std::runtime_error' #1569

Closed
apcameron opened this issue May 23, 2023 · 3 comments · Fixed by #1599

Comments

@apcameron
Contributor

./main -m ./models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512
main: build = 584 (2e6cd4b)
main: seed = 1684832847
ggml_opencl: selecting platform: 'PowerVR'
ggml_opencl: selecting device: 'PowerVR B-Series BXE-4-32'
ggml_opencl: device FP16 support: true
llama.cpp: loading model from ./models/7B/ggml-model-q4_0.bin
terminate called after throwing an instance of 'std::runtime_error'
what(): unexpectedly reached end of file
Aborted (core dumped)

@apcameron
Contributor Author

./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0
After re-running the quantize command above, it seems to work again.

@mgroeber9110
Contributor

This may be the same error as #1589: std::runtime_error is currently not caught everywhere in the code, because the exceptions thrown are a mix of std::string and std::exception-derived types.

@ibndias

ibndias commented May 29, 2023

./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0 After running the command above it seems to work again

this solved my issue

3 participants