

[BUG] Another permissions error when installing with docker-compose #2013

Closed
5 of 9 tasks
salimfadhley opened this issue Jul 23, 2024 · 10 comments
Assignees
Labels
bug Something isn't working

Comments

@salimfadhley

Pre-check

  • I have searched the existing issues and none cover this bug.

Description

This looks similar, but not the same as #1876

As for following the instructions: I've not seen any relevant guide to installing with Docker, so I'm working somewhat blind.

Background: I'm trying to run this on an Asustor NAS, which offers very little ability to customize the environment. Ideally, I'd just like to be able to run this by pasting a docker-compose file into Portainer and having it work its magic from there:


sal@halob:/volume1/home/sal/apps/private-gpt $ docker-compose up
[+] Running 3/3
 ✔ Network private-gpt_default          Created                                                                                                                                   0.1s
 ✔ Container private-gpt-ollama-1       Created                                                                                                                                   0.1s
 ✔ Container private-gpt-private-gpt-1  Created                                                                                                                                   0.1s
Attaching to ollama-1, private-gpt-1
ollama-1       | Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
ollama-1       | Your new public key is:
ollama-1       |
ollama-1       | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNQkShAIoUDyyueUTiCHM9/AZfZ+rxnUZgmh+YByBVB
ollama-1       |
ollama-1       | 2024/07/23 23:20:28 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
ollama-1       | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:778 msg="total blobs: 0"
ollama-1       | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:785 msg="total unused blobs removed: 0"
ollama-1       | time=2024-07-23T23:20:28.317Z level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.2.6)"
ollama-1       | time=2024-07-23T23:20:28.318Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1112441504/runners
private-gpt-1  | 23:20:29.406 [INFO    ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'docker']
ollama-1       | time=2024-07-23T23:20:33.589Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60102 cpu]"
ollama-1       | time=2024-07-23T23:20:33.589Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
ollama-1       | time=2024-07-23T23:20:33.589Z level=WARN source=gpu.go:225 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"
ollama-1       | time=2024-07-23T23:20:33.590Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.1 GiB" available="28.1 GiB"
private-gpt-1  | There was a problem when trying to write in your cache folder (/nonexistent/.cache/huggingface/hub). You should set the environment variable TRANSFORMERS_CACHE to a writable directory.
private-gpt-1  | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
private-gpt-1  | 23:20:40.419 [INFO    ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
private-gpt-1  | Traceback (most recent call last):
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1  |     return self._context[key]
private-gpt-1  |            ~~~~~~~~~~~~~^^^^^
private-gpt-1  | KeyError: <class 'private_gpt.ui.ui.PrivateGptUi'>
private-gpt-1  |
private-gpt-1  | During handling of the above exception, another exception occurred:
private-gpt-1  |
private-gpt-1  | Traceback (most recent call last):
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1  |     return self._context[key]
private-gpt-1  |            ~~~~~~~~~~~~~^^^^^
private-gpt-1  | KeyError: <class 'private_gpt.server.ingest.ingest_service.IngestService'>
private-gpt-1  |
private-gpt-1  | During handling of the above exception, another exception occurred:
private-gpt-1  |
private-gpt-1  | Traceback (most recent call last):
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1  |     return self._context[key]
private-gpt-1  |            ~~~~~~~~~~~~~^^^^^
private-gpt-1  | KeyError: <class 'private_gpt.components.vector_store.vector_store_component.VectorStoreComponent'>
private-gpt-1  |
private-gpt-1  | During handling of the above exception, another exception occurred:
private-gpt-1  |
private-gpt-1  | Traceback (most recent call last):
private-gpt-1  |   File "<frozen runpy>", line 198, in _run_module_as_main
private-gpt-1  |   File "<frozen runpy>", line 88, in _run_code
private-gpt-1  |   File "/home/worker/app/private_gpt/__main__.py", line 5, in <module>
private-gpt-1  |     from private_gpt.main import app
private-gpt-1  |   File "/home/worker/app/private_gpt/main.py", line 6, in <module>
private-gpt-1  |     app = create_app(global_injector)
private-gpt-1  |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/private_gpt/launcher.py", line 63, in create_app
private-gpt-1  |     ui = root_injector.get(PrivateGptUi)
private-gpt-1  |          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1  |     provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1  |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1  |     instance = self._get_instance(key, provider, self.injector)
private-gpt-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1  |     return provider.get(injector)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1  |     return injector.create_object(self._cls)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1  |     self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
private-gpt-1  |     dependencies = self.args_to_inject(
private-gpt-1  |                    ^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
private-gpt-1  |     instance: Any = self.get(interface)
private-gpt-1  |                     ^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1  |     provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1  |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1  |     instance = self._get_instance(key, provider, self.injector)
private-gpt-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1  |     return provider.get(injector)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1  |     return injector.create_object(self._cls)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1  |     self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
private-gpt-1  |     dependencies = self.args_to_inject(
private-gpt-1  |                    ^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
private-gpt-1  |     instance: Any = self.get(interface)
private-gpt-1  |                     ^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1  |     provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1  |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1  |     return function(*args, **kwargs)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1  |     instance = self._get_instance(key, provider, self.injector)
private-gpt-1  |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1  |     return provider.get(injector)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1  |     return injector.create_object(self._cls)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1  |     self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1040, in call_with_injection
private-gpt-1  |     return callable(*full_args, **dependencies)
private-gpt-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/private_gpt/components/vector_store/vector_store_component.py", line 114, in __init__
private-gpt-1  |     client = QdrantClient(
private-gpt-1  |              ^^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/qdrant_client.py", line 117, in __init__
private-gpt-1  |     self._client = QdrantLocal(
private-gpt-1  |                    ^^^^^^^^^^^^
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py", line 66, in __init__
private-gpt-1  |     self._load()
private-gpt-1  |   File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py", line 97, in _load
private-gpt-1  |     os.makedirs(self.location, exist_ok=True)
private-gpt-1  |   File "<frozen os>", line 215, in makedirs
private-gpt-1  |   File "<frozen os>", line 225, in makedirs
private-gpt-1  | PermissionError: [Errno 13] Permission denied: 'local_data/private_gpt'
^CGracefully stopping... (press Ctrl+C again to force)
[+] Stopping 2/2
 ✔ Container private-gpt-private-gpt-1  Stopped                                                                                                                                   0.3s
 ✔ Container private-gpt-ollama-1       Stopped  

Steps to Reproduce

  1. Clone the repo
  2. docker-compose build
  3. docker-compose up

Expected Behavior

It should just run

Actual Behavior

Error, as reported above

Environment

Running on an Asustor NAS, Docker 25.0.5

Additional Information

No response

Version

latest

Setup Checklist

  • Confirm that you have followed the installation instructions in the project’s documentation.
  • Check that you are using the latest version of the project.
  • Verify disk space availability for model storage and data processing.
  • Ensure that you have the necessary permissions to run the project.

NVIDIA GPU Setup Checklist

  • Check that all CUDA dependencies are installed and compatible with your GPU (refer to CUDA's documentation)
  • Ensure an NVIDIA GPU is installed and recognized by the system (run nvidia-smi to verify).
  • Ensure proper permissions are set for accessing GPU resources.
  • Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi)
@salimfadhley salimfadhley added the bug Something isn't working label Jul 23, 2024
@salimfadhley
Author

salimfadhley commented Jul 23, 2024

Just a request: one thing that would make this project far more accessible is support for Portainer, a widely used tool for launching containerized applications. Asustor NAS servers support it, and for apps that support it, launching a new service can be as simple as pasting in a docker-compose file, or providing a path to a git repo containing one. Portainer takes care of most things, like building the images (if no official images exist), and then provides a handy control panel so the admin can browse logs.

It really helps if the default Docker config "just works". Failing that, could we have a section on how to install with Docker? I couldn't see a mention of Docker in the manual.
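For illustration only, this is the kind of self-contained compose entry meant here. The service name is taken from the log above, but the paths and UID/GID values are guesses, not the project's actual file:

```yaml
services:
  private-gpt:
    build: .
    # Run as the host user so bind-mounted folders stay writable.
    # "1000:1000" is an assumption; substitute the output of `id -u` / `id -g`.
    user: "1000:1000"
    volumes:
      - ./local_data:/home/worker/app/local_data
```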

@jaluma
Collaborator

jaluma commented Aug 8, 2024

Have you tried the latest changes? It looks like a read/write permissions problem on your host machine.
We've updated our docker-compose setup to provide different profiles.

@theodufort

Same error here

@iguy0

iguy0 commented Aug 30, 2024

Same issue here. I tried to change the UID/GID to match the user I have on my Linux system (1000) and got this when building:
=> ERROR [private-gpt-ollama app 1/10] RUN addgroup --system --gid ${GROUP_GID} worker

I then modified the Dockerfile.ollama and the user creation looks like this:
# Define the User ID (UID) for the non-root user
# UID 100 is chosen to avoid conflicts with existing system users
ARG UID=1000

# Define the Group ID (GID) for the non-root user
# GID 65534 is often used for the 'nogroup' or 'nobody' group
ARG GID=1000
ARG UGNAME=worker

RUN addgroup --system --gid ${GID} ${UGNAME}

RUN adduser --system --disabled-password --home /home/${UGNAME} \
    --uid ${UID} --ingroup ${UGNAME} ${UGNAME}

#RUN adduser --system --gid ${GID} --uid ${UID} --home /home/worker worker
WORKDIR /home/worker/app

It worked!
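A possible alternative to editing Dockerfile.ollama by hand: if the Dockerfile declares `ARG UID` and `ARG GID` (as in the snippet above), the same override can be passed at build time so the in-container `worker` user matches the host user that owns the bind-mounted folders. This is a sketch under that assumption, not a documented project workflow:

```shell
# Build with the container user mapped to the current host user.
# Guarded so the command degrades gracefully where docker-compose is absent.
if command -v docker-compose >/dev/null 2>&1; then
    docker-compose build --build-arg UID="$(id -u)" --build-arg GID="$(id -g)"
else
    echo "docker-compose not found; would run:"
    echo "docker-compose build --build-arg UID=$(id -u) --build-arg GID=$(id -g)"
fi
```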

@jaluma
Collaborator

jaluma commented Sep 11, 2024

Sorry for the delay :(

I just pushed a generic fix in #2059.
If somebody can check whether this fixes the issue, that would be great!

@salimfadhley @iguy0 @theodufort @vilaca

@jaluma jaluma self-assigned this Sep 11, 2024
@macornwell

macornwell commented Sep 14, 2024

I am having this issue right now. It's not a permissions issue. There is nothing in the folder. ./local_data is empty.

If I copy-paste private_gpt into it (I am just hacking, I have no idea if that's right, forgive me), it fails trying to find another file.

Not permissions, there is no file. This is coming directly from a brand new "git clone" and then "docker compose up" kind of installation.

@jaluma
Collaborator

jaluma commented Sep 16, 2024

@macornwell
But a folder also has permissions. If the folder has the wrong permissions, you won't be able to write to it. You can see this by running ls -al. Have you tried my potential fix? You can check it out with: git checkout fix/docker-permissions
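For anyone unsure whether their error is really a permissions problem: the failure mode can be reproduced (and the fix verified) without Docker at all. This is an illustrative sketch using a throwaway directory, not the project's actual layout; on the real host directory the equivalent fix would be chmod/chown on ./local_data:

```shell
# Simulate a bind-mounted data dir the container user cannot write to
rm -rf /tmp/pgpt_perm_demo
mkdir -p /tmp/pgpt_perm_demo/local_data
chmod a-w /tmp/pgpt_perm_demo/local_data

# For a non-root user this fails exactly like qdrant_local's os.makedirs call:
# PermissionError: [Errno 13] Permission denied
mkdir /tmp/pgpt_perm_demo/local_data/private_gpt 2>/dev/null \
    || echo "mkdir: Permission denied"

# Restoring write access (or chown -R <uid>:<gid> on the real host folder)
# lets the container user create its subdirectories again
chmod u+w /tmp/pgpt_perm_demo/local_data
mkdir -p /tmp/pgpt_perm_demo/local_data/private_gpt && echo "created"
```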

@macornwell

I did a checkout of fix/docker-permissions and it went through. I think this is the success you were looking for.

@jaluma
Collaborator

jaluma commented Sep 16, 2024

@macornwell
Thank you so much for the confirmation!

@jaluma jaluma closed this as completed Sep 16, 2024
@twnaing

twnaing commented Nov 9, 2024

I do not see a fix/docker-permissions branch.

It is in PR #2059 and not available in v0.6.2. Use the main branch.
