Windows install instructions #17
Thanks! |
@tloen I made a hardcoded pip package of bitsandbytes here: https://github.com/nicknitewolf/bitsandbytes , so Windows users can just run the pip install shown below. Also, may I know if the trained weights used for inference are updated? |
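For reference, the install command for that fork, as quoted in a later comment, is:

```
pip install git+https://github.com/nicknitewolf/bitsandbytes.git
```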
I tried everything to get it to work with my 3080 (10GB VRAM), but it defaults to the CPU each time, unless it's incompatible with this iteration? I replaced the path to the .dll, but it still fails with 'Could not find module 'C:\Users\X\miniconda3\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll''. I ran the 4-bit versions previously. Would love to get this to work. |
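A quick sanity check for the "defaults to the CPU" symptom is to confirm that the torch build in the active environment can see the GPU at all. From the same conda prompt:

```
nvidia-smi
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```

If `is_available()` prints False, the installed torch is a CPU-only build and bitsandbytes will fall back to CPU no matter which DLL it loads.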
Not exactly sure how legal it is to upload NVIDIA's DLLs, but you need to have these few DLLs in the folder too. This is for the new one I'll be uploading soon.
|
EDIT 2: One last thing under Windows is to install the GPU version of torch, which is not the default. You can go to https://pytorch.org/ to select the exact version you want, and it generates the install command for you:
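For example, the command the selector generates for pip with CUDA 11.8 looks like this (the index URL changes with the CUDA version you pick):

```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```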
EDIT: I installed @nicknitewolf's build (pip install git+https://github.com/nicknitewolf/bitsandbytes.git), which requires CUDA Toolkit 12.1. Did you install CUDA Toolkit 12.1? Before I did, I got the error below, and the "or one of its dependencies" part was the problem: CUDA SETUP: Loading binary C:\Users\jim\anaconda3\envs\alpaca\lib\site-packages\bitsandbytes\bitsandbytes_cuda120.dll...Could not find module 'C:\Users\jim\anaconda3\envs\alpaca\lib\site-packages\bitsandbytes\bitsandbytes_cuda120.dll' (or one of its dependencies). Try using the full path with constructor syntax. |
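The "(or one of its dependencies)" part usually means the bitsandbytes DLL itself exists but the CUDA runtime libraries it links against cannot be found on PATH. Assuming the CUDA Toolkit installer has set %CUDA_PATH%, a quick check from the same prompt is:

```
echo %CUDA_PATH%
where cudart64_*.dll
where cublas64_*.dll
```

If `where` reports nothing, installing the matching CUDA Toolkit (or adding its bin directory to PATH) is the usual fix.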
I tried troubleshooting it with different versions of CUDA, but I couldn't get this working on Windows. I did the exact same thing in WSL2 and it ran fine with CUDA 11.7. |
@Anjlo, FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set … |
Ok. Got it working. If anyone wants to know how, here's what I have done:
… and copy them. |
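For anyone following along, a minimal sketch of that copy step (assuming the CUDA Toolkit has set %CUDA_PATH%, and using the finetune env path mentioned later in the thread) could look like:

```
:: copy the CUDA runtime and cuBLAS DLLs next to the bitsandbytes binaries
copy "%CUDA_PATH%\bin\cudart64_*.dll" "%USERPROFILE%\.conda\envs\finetune\Lib\site-packages\bitsandbytes\"
copy "%CUDA_PATH%\bin\cublas64_*.dll" "%USERPROFILE%\.conda\envs\finetune\Lib\site-packages\bitsandbytes\"
```

The exact DLL names depend on the CUDA version the bitsandbytes build expects.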
@Paillat-dev thank you for the steps. I attempted to follow them, but PyTorch does not seem to be compatible with Python 3.10.
Specifications:
torchaudio -> python[version='>=2.7,<2.8.0a0|>=3.5,<3.6.0a0']
Your python: python=3.10
If python is on the left-most side of the chain, that's the version you've asked for. When python appears to the right, that indicates that the thing on the left is somehow not available for the python version you are constrained to. Note that conda will not change your python version to a different minor version unless you explicitly specify that.
The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package pytorch conflicts for:
torchvision -> pytorch[version='1.10.0|1.10.1|1.10.2|1.11.0|1.12.0|1.12.1|1.13.0|1.13.1|2.0.0|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0|1.5.1']
torchaudio -> pytorch[version='1.10.0|1.10.1|1.10.2|1.11.0|1.12.0|1.12.1|1.13.0|1.13.1|2.0.0|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0']
Package pytorch-cuda conflicts for:
torchaudio -> pytorch==2.0.0 -> pytorch-cuda[version='>=11.6,<11.7|>=11.7,<11.8|>=11.8,<11.9']
torchvision -> pytorch-cuda[version='11.6.|11.7.|11.8.']
torchaudio -> pytorch-cuda[version='11.6.|11.7.|11.8.']
torchvision -> pytorch==2.0.0 -> pytorch-cuda[version='>=11.6,<11.7|>=11.7,<11.8|>=11.8,<11.9']
Package requests conflicts for:
torchvision -> requests
python=3.10 -> pip -> requests
Package setuptools conflicts for:
python=3.10 -> pip -> setuptools
pytorch -> jinja2 -> setuptools
Perhaps we should create the env with a different Python version? |
Followed these steps and it worked. I didn't get any incompatibility warnings; could you try running it and just ignoring them? |
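If the conda solver keeps failing like the output above even so, one way to sidestep the conflict is to install from the pytorch and nvidia channels explicitly (CUDA 11.7 shown as an example), which avoids the very old defaults-channel torchaudio responsible for the python<3.6 constraint:

```
conda create -n finetune python=3.10
conda activate finetune
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
```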
I don't know how to solve this problem. Can anyone help me? Thank you very much! RTX 1080 Ti, CUDA 12.1 & CUDA 11.7, pytorch-cuda=11.7. Loading checkpoint shards: 0%| | 0/33 [00:00<?, ?it/s] Error: no kernel image is available for execution on the device at line 479 in file D:\ai\tool\bitsandbytes\csrc\ops.cu |
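"No kernel image is available" on a 1080 Ti usually means the prebuilt bitsandbytes binary was compiled only for GPU architectures newer than that card's compute capability (6.1). A quick way to confirm what the card reports:

```
python -c "import torch; print(torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))"
```

If the capability is lower than what the DLL was built for, the options are to compile bitsandbytes for that architecture yourself or to find a build that includes sm_61 kernels.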
@Paillat-dev I will supplement this guide, because I ran into some problems when launching finetune.
Paste them in here: C:\Users\YOUR USER HERE\.conda\envs\finetune\Lib\site-packages\bitsandbytes |
What do you mean by "the cuda_setup folder"? |
I figured it out thanks to the error messages; it works, thanks! |
After following @ShinokuS's guide I was still experiencing issues with the bitsandbytes library.
EDIT: also, the right transformers version can be downloaded with the command sketched below.
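Installing transformers straight from the main branch was the usual way at the time to get a build new enough to include the Llama classes; as an example:

```
pip install git+https://github.com/huggingface/transformers.git
```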
These instructions will allow you to finetune on Windows:
oobabooga/text-generation-webui#147 (comment)