Replies: 2 comments 2 replies
-
Some additional information about my device configuration. GPU: NVIDIA GeForce RTX 3060
-
Tabby 0.7.0 is compiled against CUDA 11.7; could you try upgrading your CUDA version?
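One way to sanity-check this before upgrading anything: the "CUDA Version" field printed by `nvidia-smi` is the newest CUDA runtime the installed host driver can serve, and it must be at least as new as the runtime the container was built with. A minimal sketch of that comparison, assuming you copy the value from your own `nvidia-smi` output (the `11.4` below is a placeholder, not taken from this thread):

```shell
# Compare the driver's max supported CUDA version against what Tabby needs.
driver_cuda="11.4"   # placeholder: replace with the "CUDA Version" from nvidia-smi
needed_cuda="11.7"   # Tabby 0.7.0 is built against CUDA 11.7

# sort -V sorts version strings numerically; if the smallest of the two is
# needed_cuda, the driver's version is new enough.
if [ "$(printf '%s\n' "$needed_cuda" "$driver_cuda" | sort -V | head -n1)" = "$needed_cuda" ]; then
  echo "driver OK for CUDA $needed_cuda"
else
  echo "driver too old: upgrade the NVIDIA driver on the host, not just the toolkit"
fi
```

Note that the driver is a host-side component; upgrading CUDA libraries inside the container does not help if the host driver is the old part.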
-
I experimented with Tabby earlier this year and then shut down my own Tabby server for various reasons. Recently, Tabby added support for C++, and I wanted to use it again. However, after I updated the Docker image and downloaded the new TabbyML model, the following problem (CUDA error 804) occurred. It seems to be related to my system's kernel version and NVIDIA driver version, and I don't dare to update those lightly, so I'd like to ask: what is the correct way to fix this?
This is the tabby error.
[~/.tabby/models/TabbyML/DeepseekCoder-6.7B/ggml] ❱❱❱ docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby serve --model TabbyML/DeepseekCoder-6.7B --device cuda
2023-12-25T07:12:14.372199Z INFO tabby::serve: crates/tabby/src/serve.rs:111: Starting server, this might takes a few minutes...
2023-12-25T07:12:14.376378Z INFO tabby::services::code: crates/tabby/src/services/code.rs:53: Index is ready, enabling server...
CUDA error 804 at /root/workspace/crates/llama-cpp-bindings/llama.cpp/ggml-cuda.cu:478: forward compatibility was attempted on non supported HW
current device: 0
This is my nvidia-smi information.
And these are my CUDA and cuDNN libraries.
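For context on the error itself: CUDA error 804 is `cudaErrorCompatNotSupportedOnDevice`, raised when the CUDA forward-compatibility path is attempted on hardware that does not support it (as far as I know, the forward-compatibility packages only cover data-center GPUs, not GeForce cards like the RTX 3060). In practice it usually means the host driver is older than the CUDA runtime inside the image, so the fix is upgrading the host driver rather than anything in the container. A minimal sketch of the version comparison involved (the helper names and version numbers here are illustrative, not part of Tabby):

```python
# Hypothetical helper: compare the driver's maximum supported CUDA version
# (the "CUDA Version" field of nvidia-smi) against the runtime the
# container was built with. All version strings below are examples.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '11.7' or '515.43.04' into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def driver_supports_runtime(driver_max_cuda: str, runtime_cuda: str) -> bool:
    """True if the driver's max supported CUDA version covers the container's runtime."""
    return parse_version(driver_max_cuda) >= parse_version(runtime_cuda)

# A driver reporting CUDA 11.4 cannot serve a CUDA 11.7 build; on GeForce
# hardware this surfaces as error 804 instead of a clean version mismatch.
print(driver_supports_runtime("11.4", "11.7"))  # False -> upgrade the host driver
print(driver_supports_runtime("12.0", "11.7"))  # True
```

If the driver check passes and the error persists, a mismatched kernel module (driver upgraded without a reboot) is another common cause worth ruling out.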