ERROR. How to fix? #67
Running on backend llama.cpp.
Model path is empty.
Use default llama.cpp model path: ./models/llama-2-7b-chat.ggmlv3.q4_0.bin
Model exists in ./models/llama-2-7b-chat.ggmlv3.q4_0.bin.
Traceback (most recent call last):
File "C:\llama2-webui\app.py", line 325, in <module>
main()
File "C:\llama2-webui\app.py", line 56, in main
llama2_wrapper = LLAMA2_WRAPPER(
^^^^^^^^^^^^^^^
File "C:\llama2-webui\llama2_wrapper\model.py", line 99, in __init__
self.init_model()
File "C:\llama2-webui\llama2_wrapper\model.py", line 103, in init_model
self.model = LLAMA2_WRAPPER.create_llama2_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\llama2-webui\llama2_wrapper\model.py", line 125, in create_llama2_model
model = Llama(
^^^^^^
File "C:\Python311\Lib\site-packages\llama_cpp\llama.py", line 323, in __init__
assert self.model is not None
AssertionError
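The assertion `self.model is not None` in `llama_cpp` fires when the native backend fails to load the model file, even though the file exists on disk. A common cause around the time of this issue was a format mismatch: llama-cpp-python releases from roughly 0.1.79 onward only load GGUF model files, while `llama-2-7b-chat.ggmlv3.q4_0.bin` is in the older GGML v3 container. A minimal pre-flight check can distinguish the two by the file's magic bytes: GGUF files begin with the ASCII bytes `GGUF`. This is a sketch, not part of llama2-webui; the function name and return labels are my own:

```python
def detect_model_format(path: str) -> str:
    """Inspect the first four bytes of a model file.

    GGUF files start with the ASCII magic b"GGUF"; anything else is
    treated here as a legacy or unknown container (e.g. older GGML files).
    """
    with open(path, "rb") as f:
        magic = f.read(4)
    return "gguf" if magic == b"GGUF" else "legacy-or-unknown"
```

If the check reports a non-GGUF file, the usual fixes are either to pin llama-cpp-python to a release that still reads GGML (pre-0.1.79), or to download a GGUF build of the same model and point the model path at it.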