Replies: 1 comment 1 reply
-
Can you double check that the llama.cpp version that you built used the `LLAMA_CURL` option? If using cmake this would look something like this:

```console
$ cmake -S . -B build -DLLAMA_CURL=ON
```

Or if using make:

```console
$ make LLAMA_CURL=1
```

Also, if I remember correctly you might need the libcurl development headers installed:

```console
$ sudo apt install libcurl4-openssl-dev
```
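Putting those steps together, a minimal sequence on Ubuntu might look like the following (a sketch only; the name and path of the built binary vary between llama.cpp versions, so adjust the `ldd` line accordingly):

```console
# Install the libcurl development headers (the runtime libcurl4 packages are not enough to build against).
$ sudo apt install libcurl4-openssl-dev

# Reconfigure and rebuild with curl support enabled.
$ cmake -S . -B build -DLLAMA_CURL=ON
$ cmake --build build

# Check that the rebuilt binary actually links against libcurl.
# (binary path assumed here; older builds may produce ./main instead)
$ ldd build/bin/llama-cli | grep libcurl
```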
-
Hi!
It seems like my llama.cpp can't use libcurl on my system. When I try to pull a model from HF, I get the following:

```
llama_load_model_from_hf: llama.cpp built without libcurl, downloading from Hugging Face not supported.
```

I'm on Ubuntu, and have the following packages installed:

- libcurl3t64-gnutls
- libcurl4t64

libcurl4t64 in particular provides libcurl4 (abridged).

Is this the curl used in `common_download_file` in `common.cpp`, or is that looking for something else? Any ideas? Thanks!
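A note on the packages listed above: `libcurl3t64-gnutls` and `libcurl4t64` only provide the runtime shared library. Since the error message says the binary was built without libcurl, what matters is whether the curl development headers were available when llama.cpp was compiled. A quick way to check on Debian/Ubuntu (sketch, standard tooling only):

```console
# Runtime packages: shared library only, no headers.
$ dpkg -l | grep libcurl

# Succeeds only if the development package (headers + pkg-config file) is installed.
$ pkg-config --cflags --libs libcurl

# If the previous command fails, install the headers and rebuild with LLAMA_CURL enabled.
$ sudo apt install libcurl4-openssl-dev
```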