Windows install instructions #17

Closed
IntoThatGoodNight opened this issue Mar 16, 2023 · 17 comments

Comments

@IntoThatGoodNight

These instructions will allow you to finetune on Windows.

oobabooga/text-generation-webui#147 (comment)

@tloen tloen closed this as completed in 88bfa8f Mar 16, 2023
@tloen
Owner

tloen commented Mar 16, 2023

Thanks!

@niclimcy

@tloen I made a hardcoded pip package of bitsandbytes here: https://github.com/nicknitewolf/bitsandbytes. Windows users can just run pip install git+https://github.com/nicknitewolf/bitsandbytes.git

Also may I know if the trained weights in inference are updated?

@Anjlo

Anjlo commented Mar 18, 2023

I tried everything to get it to work with my 3080 (with 10GB VRAM) but it defaults to the CPU each time, unless it's incompatible for this iteration? I replaced the path to .dll but it resorts to 'Could not find module 'C:\Users\X\miniconda3\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll''

I ran the 4bit versions previously.

Would love to get this to work.
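
One way to narrow down a "Could not find module" failure like this (a sketch, not from this thread; point dll_path at whatever path your own error prints) is to try loading the DLL directly with ctypes, which shows whether the file itself or one of its dependencies is the missing piece:

# Hypothetical diagnostic: try to load the bitsandbytes CUDA DLL directly.
# dll_path is an assumption; use the path from your own error message.
import os
import ctypes

dll_path = r"C:\Users\X\miniconda3\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll"

print("file exists:", os.path.exists(dll_path))  # False means the DLL itself is missing
try:
    ctypes.WinDLL(dll_path)  # raises OSError if a dependent CUDA DLL cannot be found
    print("loaded OK")
except OSError as exc:
    print("load failed:", exc)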

@niclimcy

> I tried everything to get it to work with my 3080 (with 10GB VRAM) but it defaults to the CPU each time, unless it's incompatible for this iteration? I replaced the path to .dll but it resorts to 'Could not find module 'C:\Users\X\miniconda3\lib\site-packages\bitsandbytes\libbitsandbytes_cuda116.dll''
>
> I ran the 4bit versions previously.
>
> Would love to get this to work.

I'm not exactly sure how legal it is to upload NVIDIA's DLLs, but you need to have these few DLLs in the folder too.

This is for the new one I'll be uploading soon:

cudart64_12.dll
cublas64_12.dll
cublasLt64_12.dll
cusparse64_12.dll
nvJitLink_120_0.dll
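
For reference, a small script along these lines can do the copying once the CUDA toolkit is installed (a sketch; both directories below are assumptions, the default CUDA 12.1 install path and an example conda env, so adjust them to your setup):

# Sketch: copy the required CUDA runtime DLLs next to bitsandbytes.
# Both paths are assumptions; adjust to your CUDA version and environment.
import shutil
from pathlib import Path

cuda_bin = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin")
bnb_dir = Path(r"C:\Users\YOUR USER HERE\.conda\envs\finetune\Lib\site-packages\bitsandbytes")

dlls = [
    "cudart64_12.dll",
    "cublas64_12.dll",
    "cublasLt64_12.dll",
    "cusparse64_12.dll",
    "nvJitLink_120_0.dll",
]

for name in dlls:
    shutil.copy2(cuda_bin / name, bnb_dir / name)
    print("copied", name)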

@jimdinunzio

jimdinunzio commented Mar 22, 2023

> cudart64_12.dll

EDIT 2: One last thing under Windows is to install the GPU version of torch, which is not the default. You can go to https://pytorch.org/ to select the exact version you want, and it generates the install command:

  `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`

EDIT: I installed @nicknitewolf's build (pip install git+https://github.com/nicknitewolf/bitsandbytes.git) which requires CUDA Toolkit 12.1.

Did you install CUDA Toolkit 12.1? Before I did, I got this error, and the "(or one of its dependencies)" part was the problem.
After installing the toolkit, the error is gone.

CUDA SETUP: Loading binary C:\Users\jim\anaconda3\envs\alpaca\lib\site-packages\bitsandbytes\bitsandbytes_cuda120.dll...Could not find module 'C:\Users\jim\anaconda3\envs\alpaca\lib\site-packages\bitsandbytes\bitsandbytes_cuda120.dll' (or one of its dependencies). Try using the full path with constructor syntax.
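
For reference, a quick way to confirm that the GPU build of torch is the one actually installed (a sketch, independent of this repo):

# Sketch: confirm the CUDA-enabled PyTorch build is installed and sees the GPU.
import torch

print(torch.__version__)           # the pip GPU wheel ends in something like "+cu118"
print(torch.version.cuda)          # CUDA version the build was compiled against; None means CPU-only
print(torch.cuda.is_available())   # should be True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))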

@Anjlo

Anjlo commented Mar 22, 2023

I tried troubleshooting it with different versions of CUDA but I couldn't get this working on Windows. I did the exact same thing in WSL2 and it ran functionally with CUDA 11.7.

@jimdinunzio

@Anjlo,
Running under Windows, if I have made it to the progress bar and it is progressing, am I good? Or did you encounter failures later?

FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
1%|█▍ | 16/1167 [17:11<20:24:28, 63.83s/it]
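
The FutureWarning itself is only a deprecation notice from the transformers AdamW implementation. If the training script builds a transformers.TrainingArguments (an assumption about the setup, not something stated in this thread), one way to switch to the PyTorch optimizer and silence the warning is the optim argument:

# Sketch: select torch.optim.AdamW through TrainingArguments instead of the
# deprecated transformers implementation (output_dir here is a hypothetical value).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./lora-out",
    optim="adamw_torch",  # use the PyTorch AdamW, silencing the FutureWarning
)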

@Paillat-dev

OK, got it working. If anyone wants to know how, here's what I have done (a quick verification sketch follows the steps):

  1. if not done already, install miniconda3 here
  2. open miniconda
  3. create a conda python 3.10 env. with
    conda create -n finetune python=3.10.9
    and open it with
    conda activate finetune
  4. Move to any directory of your choice where you want to install alpaca-lora
  5. Clone the repository with
    git clone https://github.com/tloen/alpaca-lora.git
    and move to that folder with
    cd alpaca-lora
  6. Install requirements with
    pip install -r requirements.txt
    conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
    pip install tensorboard
  7. Install cuda 11.7 and 12.1 here and here
  8. Reboot your pc
  9. Go into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin in your file explorer
  10. Select the following files:
cudart64_12.dll
cublas64_12.dll
cublasLt64_12.dll
cusparse64_12.dll
nvJitLink_120_0.dll

11. Copy them.
12. Go to C:\Users\YOUR USER HERE\.conda\envs\finetune\Lib\site-packages\bitsandbytes
and paste the files in here.
Also download this file in the same folder.
13. Move to the cuda_setup folder and open the main.py file.
Search for:
if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None
Replace with:
if torch.cuda.is_available(): return 'libbitsandbytes_cuda116.dll', None, None, None, None
Search for this (it appears twice):
self.lib = ct.cdll.LoadLibrary(binary_path)
Replace with:
self.lib = ct.cdll.LoadLibrary(str(binary_path))
Save and close the file.
14. Reopen miniconda3 and run
conda activate finetune
15. Move to the folder created earlier and then to the alpaca-lora folder.
16. It's installed! You can now run the finetune commands in here.
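
As a sanity check after these steps (a sketch, not part of the original guide), importing bitsandbytes inside the activated env should print which binary its CUDA setup loads, and it should no longer fall back to the CPU library:

# Sketch: run inside the activated "finetune" env.
import torch
import bitsandbytes  # the import runs the patched CUDA setup and prints which binary it loads

print("torch CUDA available:", torch.cuda.is_available())
print("bitsandbytes imported from:", bitsandbytes.__file__)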

@norvin2002

@Paillat-dev thank you for the steps. I attempted to follow the steps, but pytorch does not seem to be compatible with python 3.10:

Specifications:

  • torchaudio -> python[version='>=2.7,<2.8.0a0|>=3.5,<3.6.0a0']

Your python: python=3.10

If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.

The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package pytorch conflicts for:
torchvision -> pytorch[version='1.10.0|1.10.1|1.10.2|1.11.0|1.12.0|1.12.1|1.13.0|1.13.1|2.0.0|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0|1.5.1']
torchaudio -> pytorch[version='1.10.0|1.10.1|1.10.2|1.11.0|1.12.0|1.12.1|1.13.0|1.13.1|2.0.0|1.9.1|1.9.0|1.8.1|1.8.0|1.7.1|1.7.0|1.6.0']

Package pytorch-cuda conflicts for:
torchaudio -> pytorch==2.0.0 -> pytorch-cuda[version='>=11.6,<11.7|>=11.7,<11.8|>=11.8,<11.9']
torchvision -> pytorch-cuda[version='11.6.*|11.7.*|11.8.*']
torchaudio -> pytorch-cuda[version='11.6.*|11.7.*|11.8.*']
torchvision -> pytorch==2.0.0 -> pytorch-cuda[version='>=11.6,<11.7|>=11.7,<11.8|>=11.8,<11.9']

Package requests conflicts for:
torchvision -> requests
python=3.10 -> pip -> requests

Package setuptools conflicts for:
python=3.10 -> pip -> setuptools
pytorch -> jinja2 -> setuptools

Perhaps we should create the env with a different python version?

@Paillat-dev

Paillat-dev commented Mar 31, 2023 via email

@whitesay

I don't know how to solve this problem. Can anyone help me? Thank you very much!

RTX 1080 Ti, CUDA 12.1 & CUDA 11.7, pytorch-cuda=11.7

Loading checkpoint shards: 0%| | 0/33 [00:00<?, ?it/s]Error no kernel image is available for execution on the device at line 479 in file D:\ai\tool\bitsandbytes\csrc\ops.cu
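
"no kernel image is available for execution on the device" usually means the bitsandbytes binary was not compiled for that GPU's compute capability; a quick way to see what the card reports (a sketch, independent of this repo):

# Sketch: print the GPU's compute capability, which the "no kernel image" error relates to.
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(torch.cuda.get_device_name(0), f"- compute capability {major}.{minor}")
else:
    print("CUDA is not available to PyTorch")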

@ShinokuS

ShinokuS commented May 5, 2023

@Paillat-dev I will supplement this guide, because I ran into some problems when launching finetune (a short smoke test follows the steps).

  1. Install and open miniconda3.
  2. conda create -n finetune python=3.10.9
  3. conda activate finetune
  4. conda install git
  5. cd in a directory convenient for you and run git clone https://github.com/tloen/alpaca-lora.git
  6. cd alpaca-lora
  7. Install packages:
    pip install -r requirements.txt
    pip uninstall bitsandbytes - need an older version of bitsandbytes
    pip install bitsandbytes==0.37.2
    pip uninstall transformers - need another version for LLaMa
    pip install -q git+https://github.com/zphang/transformers@c3dc391
    pip install chardet
    conda install cchardet - because I got an error with c++14 via pip
    conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
    pip install tensorboard
  8. Install cuda 11.7 and 12.1
  9. Reboot your pc
  10. Go to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin and copy these files:
cudart64_12.dll
cublas64_12.dll
cublasLt64_12.dll
cusparse64_12.dll
nvJitLink_120_0.dll

Paste them here: C:\Users\YOUR USER HERE\.conda\envs\finetune\Lib\site-packages\bitsandbytes
Also download https://github.com/DeXtmL/bitsandbytes-win-prebuilt/blob/main/libbitsandbytes_cuda116.dll and paste it here.
11. Move to the cuda_setup folder and open the main.py file.
Search for:
if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None
Replace with:
if torch.cuda.is_available(): return 'libbitsandbytes_cuda116.dll', None, None, None, None
Search for this (it appears twice):
self.lib = ct.cdll.LoadLibrary(binary_path)
Replace with:
self.lib = ct.cdll.LoadLibrary(str(binary_path))
Save and close the file.
12. Reopen miniconda3 and run
conda activate finetune
13. cd to alpaca-lora and run finetune.py with your model.
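
A small smoke test along these lines (a sketch, not part of the guide) exercises bitsandbytes' 8-bit Adam optimizer on the GPU, which tends to fail quickly if the CUDA binary or the copied DLLs are wrong:

# Sketch: one optimizer step with bitsandbytes' Adam8bit on the GPU.
import torch
import bitsandbytes as bnb

p = torch.nn.Parameter(torch.randn(64, 64, device="cuda"))
opt = bnb.optim.Adam8bit([p], lr=1e-3)

loss = (p ** 2).sum()
loss.backward()
opt.step()
print("Adam8bit step OK")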

@Entretoize

What do you mean by "the cuda_setup folder"?

@whitesay

whitesay commented May 19, 2023 via email

@Entretoize

I found it thanks to the errors; it works, thanks!

@d0wnf4ll

d0wnf4ll commented Jun 9, 2023


After following @ShinokuS's guide I was still experiencing issues with the bitsandbytes library.
However, running the following from the conda terminal solved it:

pip install bitsandbytes-windows

EDIT: also, the right transformers version can be downloaded with
pip uninstall transformers
pip install git+https://github.com/zphang/transformers.git@llama_push

@whitesay

whitesay commented Jun 9, 2023 via email
