Warning: "The installed version of bitsandbytes was compiled without GPU support." #112
Comments
I managed to get it to work using Docker + the VS Code Dev Containers extension. Here is a repo that I have made in case someone else has the same issue and is looking for a quick solution. This repo can be used as a basis for your project.
Hi @sbrnaderi, what was the change to fix the issue? I had to use another version (https://pypi.org/project/bitsandbytes-cuda117/) in order for it to work on Linux.
Hi @abacaj, in my Dockerfile I start from the latest PyTorch Docker image and install bitsandbytes. Initially, I tried to install PyTorch and bitsandbytes on Ubuntu 18.04 (on Windows WSL) and I got the error that I mentioned above. Changing the bitsandbytes CUDA version to the version that I got from
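For what it's worth, a minimal sketch of that approach is below; the pytorch/pytorch:latest tag and the bitsandbytes-cuda117 package are assumptions (pick the variant matching your CUDA version), not the exact contents of the repo linked above.

```bash
# Sketch only: base image tag and bitsandbytes variant are assumptions
cat > Dockerfile <<'EOF'
FROM pytorch/pytorch:latest
# Install the CUDA-specific bitsandbytes build instead of the default CPU one
RUN pip install bitsandbytes-cuda117
EOF
docker build -t bnb-gpu-test .
```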
I am getting the same error with the following Dockerfile:
try
It helped, but this way
Another workaround is to symlink libcuda into your env.
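The command itself was not quoted above; a rough sketch of the symlink idea (the libcuda location and env prefix are examples and will differ on your machine):

```bash
# Find where the driver's libcuda actually lives (on WSL it is usually /usr/lib/wsl/lib)
find / -name 'libcuda.so*' 2>/dev/null
# Link it into the active environment's lib directory so bitsandbytes can locate it
ln -s /usr/lib/wsl/lib/libcuda.so "$CONDA_PREFIX/lib/libcuda.so"
```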
Thanks a lot. I succeeded in running it on WSL2.
@AbstractQbit That didn't work for me...
I have the same problem, and it seems that none of the solutions above work for me. My working environment is:
It seems to work after I replace lib/python3.8/site-packages/bitsandbytes/lib/bitsandbytes_cpu.so with lib/python3.8/site-packages/bitsandbytes/lib/bitsandbytes_cuda112.so.
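Roughly, that replacement looks like the following (illustrative; the site-packages path and whether the files are named bitsandbytes_*.so or libbitsandbytes_*.so depend on your environment and bitsandbytes version, so keep a backup):

```bash
cd lib/python3.8/site-packages/bitsandbytes
cp libbitsandbytes_cpu.so libbitsandbytes_cpu.so.bak    # back up the CPU library first
cp libbitsandbytes_cuda112.so libbitsandbytes_cpu.so    # make the CUDA build the one that gets loaded
```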
Hi @sbrnaderi, thank you so much for sharing the Dockerfile. In my case, to get my Docker container running, I have to set
Same issue here, fresh installation. I don't understand why a tool used for machine learning has its default version compiled without GPU support.
It could be the wrong CUDA version, or it cannot find the correct CUDA path.
Thanks! I reinstalled CUDA and it worked on Fedora. The warning message is misleading; it means something completely different from what is actually happening in this scenario. This helped me: https://unix.stackexchange.com/questions/716248/how-do-i-use-cuda-toolkit-nvcc-11-7-1-on-fedora-36 He's installing from the Fedora 35 repo onto 36; I installed from the 36 repo onto 37 and it worked.
I changed this Dockerfile a bit and still ran into the same issue. I'm seeing problems in containers built off this JupyterLab repo on more than one VM with P100s and virtualized A100s. EDIT: The VM issues I was seeing were related to permissions when switching users within the Jupyter images and were unrelated to bitsandbytes. This Dockerfile still has issues even when running "python -m bitsandbytes" instead of the check_bnb_install.py script.
docker build -t bitsandbytes_test:latest .
Hmm. I'm stuck on a WSL installation at the last step, too. The only thing throwing errors is the bitsandbytes package, which either has no GPU support (which is hilarious to me in this case) or is deprecated. The symlink workaround is something I'm not overly fond of doing. How do you replace the deprecated file once you have "updated" it in WSL (running the latest Ubuntu as a distro)?
This solved it for me:
Tried a lot of stuff, but I still have this issue (on a Dockerfile based on
Not sure why it is so hard to use a tool made for CUDA on a CUDA-enabled machine; is there a specific reason? I'd love to help, but I am a noob at this stuff.
This is what worked for me, but because I installed torch with CUDA 11.7, I assume I have to change it to "bitsandbytes-cuda117". No more warning, so for now it seems that was the solution. Edit: I got a new error message after initializing the environment again. Apparently this issue has to do with PyTorch just updating to 2.0, so the solution that finally worked for me was installing the last version of PyTorch prior to 2.0, as said here: oobabooga/text-generation-webui#400 (comment)
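For reference, the version-matching step described here looks roughly like this (sketch; the cuda117 suffix assumes a CUDA 11.7 torch build):

```bash
python -c "import torch; print(torch.version.cuda)"   # check which CUDA version torch was built with
pip uninstall -y bitsandbytes
pip install bitsandbytes-cuda117                      # choose the suffix matching the version printed above
```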
I think I got why it is not working and how to fix it, people. So,
Hi there! Could anybody solve this issue? If someone has a Dockerfile, maybe they could provide it here to help understand how to solve this annoying issue. Thanks a lot.
helped for me, although this standard path should be searched by default.
I strongly believe it's better to just use Docker so you don't mess up your host CUDA version (especially if you are using Windows (🤮) and you game on it).
Does bitsandbytes require the NVIDIA development container as a base instead of runtime? This container works:
However, using the base image It seems like the PyTorch image I was using before is also starting from the NVIDIA runtime image.
@njacobson-nci the
@FrancescoSaverioZuppichini Same thing with CUDA 11.8.
Same here, it's working on the devel image, but on the runtime image it's failing.
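To make the devel-vs-runtime difference concrete: the -devel images ship the full CUDA toolkit (nvcc, headers) that bitsandbytes' CUDA detection relies on, while the -runtime images do not. A minimal sketch (image tags and package versions are illustrative):

```bash
# Sketch only; tags and versions are examples
cat > Dockerfile <<'EOF'
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
# FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04   <- reported in this thread to trigger the "compiled without GPU support" warning
RUN apt-get update && apt-get install -y python3-pip
RUN pip3 install torch --index-url https://download.pytorch.org/whl/cu118
RUN pip3 install bitsandbytes
EOF
docker build -t bitsandbytes_test:latest .
```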
Possible solution: This will add the path specific to WSL into the search path.
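The actual command was not included in that comment; a common form of this WSL fix (an assumption, not a quote) is to put the WSL driver library directory on the loader path:

```bash
# /usr/lib/wsl/lib is where WSL exposes the Windows-side CUDA driver libraries (libcuda.so)
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
```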
Wow, this issue has been an amazing time sink, especially when I am trying to place this in a GPU Docker image. I hope this gets streamlined in the next few weeks.
I think you should be able to fix it if you use the GPU at Docker build time, something like this: https://stackoverflow.com/a/61737404/13156539
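A common way to expose the GPU during docker build (presumably what the linked answer describes; treat this as a sketch, not a quote of it) is to make the NVIDIA runtime the default:

```bash
# Sketch: make the NVIDIA container runtime the default (requires nvidia-container-runtime to be installed)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "runtimes": { "nvidia": { "path": "nvidia-container-runtime", "runtimeArgs": [] } },
  "default-runtime": "nvidia"
}
EOF
sudo systemctl restart docker
```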
I am trying to create the Docker container on Windows with the above Dockerfile. First, nvidia-smi shows CUDA Version: 11.6, which is why I changed cuda:11.8.0 above to cuda:11.6.0 and https://download.pytorch.org/whl/cu118 to https://download.pytorch.org/whl/cu116. After building the image and then running it with docker run -it --name containername --gpus all -p 3000:3000 imagename, I get the following:
Has anyone experienced this?
Windows + WSL: only this works for me.
It is, but you just need to use a container with the correct CUDA version.
In my case I had to change some versions in:
That works for me.
I saw this warning when doing
What do you mean by [path to your env here]?
Fam, do not do weird symlinks on your OS; use Docker!
I'm having the same issue, but I'm working in a Databricks notebook. These are the versions of the packages I have: And as I mentioned, I'm running the code in a Databricks notebook. For me as well, torch.cuda.is_available() returns True. None of the solutions worked for me; can someone please help? And while importing bitsandbytes I get this output:
And the output for
I ran into this as well; in my case it turned out that I had installed PyTorch without GPU support. Reinstalling PyTorch for GPU solved it.
bitsandbytes did not support Windows before, but my method can support Windows. (yuhuang)
3. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes-windows
4. J:\StableDiffusion\sdwebui\py310\python.exe -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.1-py3-none-win_amd64.whl
Replace your SD venv directory (the python.exe folder) here: (J:\StableDiffusion\sdwebui\py310)
OR, if you are on a Linux distribution (Ubuntu, macOS, etc.) AND CUDA Version: 11.X, bitsandbytes can support Ubuntu. (yuhuang)
3. J:\StableDiffusion\sdwebui\py310\python.exe -m pip uninstall bitsandbytes-windows
4. J:\StableDiffusion\sdwebui\py310\python.exe -m pip install https://github.com/TimDettmers/bitsandbytes/releases/download/0.41.0/bitsandbytes-0.41.0-py3-none-any.whl
Replace your SD venv directory (the python.exe folder) here: (J:\StableDiffusion\sdwebui\py310)
[SOLVED] It turns out the Databricks cluster we were using was a multi-user cluster, which only has user-level access; once I got a single-user cluster it worked fine because it has admin-level access.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread.
Where could I find the cuda112 file?
This works for me, as python -m bitsandbytes reports success. However, when I run the real Python script, it says "Unknown CUDA exception! Please check your CUDA install." It might also be that your GPU is too old and it keeps using the CPU version.
I kept getting this issue even after deleting and recreating the conda environment, reinstalling bitsandbytes, etc. The solution for me ended up being just:
I found most of the issues here are related to WSL. What if I haven't installed WSL? I am on Debian 11 and CUDA 12.1.
I am sure this will be of no help to most people, but the thing that worked for me was:
Issue
When I run the following line of code:
pipe = pipeline(model=name, model_kwargs= {"device_map": "auto", "load_in_8bit": True}, max_new_tokens=max_new_tokens)
I get the following warning message:
"The installed version of bitsandbytes was compiled without GPU support"
and the following error at the end:
AttributeError: /miniconda3/envs/bits/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats
My setup
Ubuntu 18.04 on Windows WSL
CUDA version: 11.4 (I can confirm this with the nvidia-smi command)
PyTorch installed using conda:
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
Installed the following Python packages after installing PyTorch:
My hardware
NVIDIA GPU RTX2060 SUPER (8GB)
AMD CPU (12 cores)
My investigations so far
torch.cuda.is_available() --> returns True
This returns the following error:
AttributeError: /miniconda3/envs/bits/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_g32
if I run
I get:
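For anyone hitting the same undefined-symbol errors, the checks referenced throughout this thread look roughly like this (illustrative commands, not the original poster's exact ones):

```bash
# Confirm torch sees the GPU and report which CUDA version it was built against
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
# Ask bitsandbytes to report its CUDA setup and which library it loaded (CPU vs CUDA build)
python -m bitsandbytes
```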