
Issue with one-click and manual install, not working out of the box... Ubuntu 24.04 #6344

Open · 1 task done
TigerTy9 opened this issue Aug 22, 2024 · 3 comments
Labels: bug (Something isn't working)

Comments


TigerTy9 commented Aug 22, 2024

Describe the bug

I tried both the manual install and the one-click install for Linux. My OS is a fresh install of Ubuntu 24.04. I've previously used this model on Windows 10 with text-generation-webui, but I'm not having an easy time with it on Linux. What's going on here? It should work out of the box like the Windows one-click install does. I shouldn't be hitting these issues on a newly installed Ubuntu OS, and I've already verified that the NVIDIA drivers for my 1650 Super are installed.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

Install text-generation-webui on a fresh copy of Ubuntu 24.04 using either the manual or the one-click method, then try to load a GGUF model.

Screenshot

No response

Logs

Traceback (most recent call last):
  File "/home/tiger/Downloads/text-generation-webui/modules/ui_model_menu.py", line 231, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/Downloads/text-generation-webui/modules/models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/Downloads/text-generation-webui/modules/models.py", line 278, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tiger/Downloads/text-generation-webui/modules/llamacpp_model.py", line 85, in from_pretrained
    result.model = Llama(**params)
                   ^^^^^^^^^^^^^^^
  File "/home/tiger/miniconda3/envs/textgen/lib/python3.11/site-packages/llama_cpp_cuda/llama.py", line 371, in __init__
    _LlamaModel(
  File "/home/tiger/miniconda3/envs/textgen/lib/python3.11/site-packages/llama_cpp_cuda/_internals.py", line 55, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: models/nephra_v1.0.Q4_0.gguf
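
This ValueError is what llama-cpp-python raises when the native loader returns no model, so the path is being reached but the load itself fails. One way to narrow it down, assuming the llama-cpp-python package inside the textgen conda environment, is to try a CPU-only load; if this succeeds, the GGUF file itself is fine and the failure is on the CUDA/VRAM side:

```python
# Minimal check (not from the original report): run inside the textgen
# environment. If this loads, the GGUF file is intact and the crash is
# GPU-side (VRAM exhaustion or a broken CUDA build).
from llama_cpp import Llama

llm = Llama(
    model_path="models/nephra_v1.0.Q4_0.gguf",  # path from the traceback above
    n_gpu_layers=0,  # 0 = keep every layer on the CPU
)
print("Model loaded on CPU without error.")
```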

System Info

Ubuntu 24.04 LTS
NVIDIA 1650S

About Info
Hardware Model: Micro-Star International Co., Ltd. MS-7C37
Processor: AMD Ryzen™ 7 3700X × 16
Memory: 64.0 GiB

Software Versions
Gnome Version: 46
Windowing System: X11
Linux Kernel Version: Linux 6.8.0-41-generic
TigerTy9 added the bug label on Aug 22, 2024

TigerTy9 commented Sep 4, 2024

Still unsolved.

norasyeezys commented

Same issue here.
Rocky Linux 9.4
NVIDIA RTX 3070 Ti

Tried backtracking, with no luck.

I tried a MythoMax 13B .gguf and hit the same issue: both the ValueError and an AttributeError ("'LlamaCppModel' object has no attribute 'model'").
You can replicate it the same way, just on a fresh install of Rocky Linux instead.
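
The paired AttributeError is consistent with a follow-on of the same failed load rather than a second bug: if Llama(**params) raises inside from_pretrained, the model attribute is never assigned, so any later code that touches .model fails. A hypothetical sketch of that pattern (not the webui's actual class):

```python
from llama_cpp import Llama

class LlamaCppModel:
    """Hypothetical sketch of the failure pattern, not the webui's real code."""

    @classmethod
    def from_pretrained(cls, path):
        result = cls()
        # If the load fails here with ValueError, result.model is never set...
        result.model = Llama(model_path=path)
        return result

    def unload(self):
        # ...so touching it afterwards raises:
        # AttributeError: 'LlamaCppModel' object has no attribute 'model'
        del self.model
```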

norasyeezys commented

Figured it out.

https://www.reddit.com/r/Oobabooga/comments/1ecbeic/getting_attributeerror_llamacppmodel_object_has/?chainedPosts=t3_162gsgu

Lower the GPU layer count. It started at 41; I lowered it to 25 and it worked fine.

Reducing the context size didn't help, though.
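
For reference, a minimal sketch of the same fix using llama-cpp-python directly, outside the web UI (the path and numbers are just the values from this thread):

```python
from llama_cpp import Llama

# Offloading all 41 layers likely overflows VRAM on smaller cards, making the
# native loader fail, which surfaces as the ValueError above. Offload fewer
# layers to the GPU; the rest stay in system RAM.
llm = Llama(
    model_path="models/nephra_v1.0.Q4_0.gguf",
    n_gpu_layers=25,  # lower this until the model loads; 0 = CPU only
    n_ctx=2048,       # context size; shrinking this alone did not help here
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```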
