-
I am having the same problem. For more information, here is what I did. In one terminal window, I spun up the Ollama container:

```
$ docker run --rm -it -e OLLAMA_DEBUG=1 -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
time=2024-03-25T11:32:37.135Z level=INFO source=images.go:806 msg="total blobs: 0"
time=2024-03-25T11:32:37.135Z level=INFO source=images.go:813 msg="total unused blobs removed: 0"
time=2024-03-25T11:32:37.135Z level=INFO source=routes.go:1110 msg="Listening on [::]:11434 (version 0.1.29)"
time=2024-03-25T11:32:37.135Z level=INFO source=payload_common.go:112 msg="Extracting dynamic libraries to /tmp/ollama3513124258/runners ..."
time=2024-03-25T11:32:38.984Z level=INFO source=payload_common.go:139 msg="Dynamic LLM libraries [cpu cuda_v11]"
time=2024-03-25T11:32:38.984Z level=DEBUG source=payload_common.go:140 msg="Override detection logic by setting OLLAMA_LLM_LIBRARY"
time=2024-03-25T11:32:38.984Z level=INFO source=gpu.go:77 msg="Detecting GPU type"
time=2024-03-25T11:32:38.984Z level=INFO source=gpu.go:191 msg="Searching for GPU management library libnvidia-ml.so"
time=2024-03-25T11:32:38.984Z level=DEBUG source=gpu.go:209 msg="gpu management search paths: [/usr/local/cuda/lib64/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/x86_64-linux-gnu/libnvidia-ml.so* /usr/lib/wsl/lib/libnvidia-ml.so* /usr/lib/wsl/drivers/*/libnvidia-ml.so* /opt/cuda/lib64/libnvidia-ml.so* /usr/lib*/libnvidia-ml.so* /usr/local/lib*/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/nvidia/current/libnvidia-ml.so* /usr/lib/aarch64-linux-gnu/libnvidia-ml.so* /opt/cuda/targets/x86_64-linux/lib/stubs/libnvidia-ml.so* /usr/local/nvidia/lib/libnvidia-ml.so* /usr/local/nvidia/lib64/libnvidia-ml.so*]"
time=2024-03-25T11:32:38.984Z level=INFO source=gpu.go:237 msg="Discovered GPU libraries: []"
time=2024-03-25T11:32:38.984Z level=INFO source=cpu_common.go:18 msg="CPU does not have vector extensions"
time=2024-03-25T11:32:38.984Z level=DEBUG source=amd_linux.go:263 msg="amdgpu driver not detected /sys/module/amdgpu"
time=2024-03-25T11:32:38.984Z level=INFO source=routes.go:1133 msg="no GPU detected"
```

I modified the shell_gpt configuration for Ollama as described in the docs, installed litellm, and started shell_gpt in another terminal window:

```
$ python --version
Python 3.12.2
$ cat requirements.txt
litellm==1.24.5
shell_gpt==1.4.0
$ sgpt "Who are you?"
...
│ in content │
│ │
│ 567 │ @property │
│ 568 │ def content(self) -> bytes: │
│ 569 │ │ if not hasattr(self, "_content"): │
│ ❱ 570 │ │ │ raise ResponseNotRead() │
│ 571 │ │ return self._content │
│ 572 │ │
│ 573 │ @property │
│ │
│ ╭───────────── locals ──────────────╮ │
│ │ self = <Response [404 Not Found]> │ │
│ ╰───────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ResponseNotRead: Attempted to access streaming response content, without having called `read()`.
```

In the Ollama terminal I also got an error:

```
[GIN] 2024/03/25 - 11:32:47 | 404 | 337.875µs | 172.17.0.1 | POST "/api/generate"
```

I also tried shell_gpt versions 1.3.0 and 1.3.1 as noted in #467, but the error remains.
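Worth noting while comparing notes: the startup log above reports `total blobs: 0`, i.e. no models have been pulled into the `ollama` volume yet, and Ollama answers `POST /api/generate` with a 404 when the requested model is not available locally. A minimal sketch for checking this, assuming the container is named `ollama` as in the command above and using `mistral` purely as a placeholder model name:

```
# List the models the server currently has (empty output would match "total blobs: 0")
curl http://localhost:11434/api/tags

# Pull a model inside the running container ("mistral" is only a placeholder)
docker exec -it ollama ollama pull mistral

# Exercise /api/generate directly; a 404 at this point still means the model is missing
curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Who are you?", "stream": false}'
```

If the direct curl works but sgpt still fails, the problem is on the shell_gpt/litellm side rather than in the container.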
-
Make sure that
-
I have the same problem. I am using the correct API_BASE_URL as mentioned above. Here is the output I get:

```
rhack@localhost:/Data2/Work> sgpt --model ollama/gwen2:latest "Who are you?"
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
```
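For comparison, shell_gpt keeps its settings in `~/.config/shell_gpt/.sgptrc`. A sketch of the Ollama-related entries, with values that are illustrative assumptions rather than a confirmed working setup:

```
$ cat ~/.config/shell_gpt/.sgptrc
# Illustrative values only; adjust to your own setup
DEFAULT_MODEL=ollama/mistral
API_BASE_URL=http://localhost:11434
OPENAI_API_KEY=dummy-key-not-checked-by-ollama
USE_LITELLM=true
```

Note that the part after `ollama/` in `--model` (or `DEFAULT_MODEL`) has to match a model that `ollama list` actually shows; a mismatched name produces the same 404 from `/api/generate`.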
-
How to fix it?