Merge pull request #161 from NanoCode012/fix/peft-setup
Fix: Update peft and gptq instruction
NanoCode012 authored Jun 8, 2023
2 parents 6abfd87 + cfff94b commit 04a1b77
18 changes: 15 additions & 3 deletions README.md
@@ -33,6 +33,7 @@
git clone https://github.com/OpenAccess-AI-Collective/axolotl

pip3 install -e .
pip3 install -U git+https://github.com/huggingface/peft.git

accelerate config

@@ -53,6 +54,7 @@ accelerate launch scripts/finetune.py examples/lora-openllama-3b/config.yml \
docker run --gpus '"all"' --rm -it winglian/axolotl:main-py3.9-cu118-2.0.0
```
- `winglian/axolotl-runpod:main-py3.9-cu118-2.0.0`: for runpod
- `winglian/axolotl-runpod:main-py3.9-cu118-2.0.0-gptq`: for gptq
- `winglian/axolotl:dev`: dev branch (not usually up to date)
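By analogy with the `docker run` line above, the gptq image could presumably be launched the same way — a sketch only, assuming the same flags apply to the runpod tag:

```shell
# Sketch, not from the README: run the gptq-enabled runpod image
# with the same GPU/interactive flags used for the main image above.
docker run --gpus '"all"' --rm -it winglian/axolotl-runpod:main-py3.9-cu118-2.0.0-gptq
```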

Or run on the current files for development:
@@ -67,9 +69,19 @@ accelerate launch scripts/finetune.py examples/lora-openllama-3b/config.yml \
2. Install pytorch stable https://pytorch.org/get-started/locally/

3. Install python dependencies with ONE of the following:
- `pip3 install -e .` (recommended, supports QLoRA, no gptq/int4 support)
- `pip3 install -e .[gptq]` (next best if you don't need QLoRA, but want to use gptq)
- `pip3 install -e .[gptq_triton]`
- Recommended, supports QLoRA, NO gptq/int4 support
```bash
pip3 install -e .
pip3 install -U git+https://github.com/huggingface/peft.git
```
- gptq/int4 support, NO QLoRA
```bash
pip3 install -e .[gptq]
```
- same gptq/int4 support as above, via the triton backend (not recommended)
```bash
pip3 install -e .[gptq_triton]
```
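Whichever option you pick, a quick sanity check (a hypothetical snippet, not part of the README; it only reports whether each package resolves in the current interpreter) can confirm the install worked:

```shell
# Report whether the key packages are importable; prints "ok" or "MISSING"
# per module without failing, so it is safe to run in any environment.
python3 - <<'EOF'
import importlib.util
for mod in ("torch", "peft", "accelerate"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'ok' if found else 'MISSING'}")
EOF
```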

- LambdaLabs
<details>
