To launch on a GPU instance (both on-demand and spot instances) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use dstack.
Write a job description in YAML as below:
# dstack.yaml
type: task

image: winglian/axolotl-cloud:main-20240429-py3.11-cu121-2.2.1

env:
  - HUGGING_FACE_HUB_TOKEN
  - WANDB_API_KEY

commands:
  - accelerate launch -m axolotl.cli.train config.yaml

ports:
  - 6006

resources:
  gpu:
    memory: 24GB..
    count: 2
Then, simply run the job with the dstack run command. Append the --spot option if you want a spot instance. The dstack run command will show you the instance with the cheapest price across the supported cloud services:
pip install dstack
HUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot
For further and more fine-grained use cases, please refer to the official dstack documentation and the detailed description of the axolotl example in the official repository.
See the examples for a quick start. It is recommended to duplicate one and modify it to your needs. The most important options are:
model
base_model: ./llama-7b-hf # local or huggingface repo
Note: The code will load the right architecture.
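For illustration, a slightly fuller model block can also pin the model and tokenizer classes explicitly; the model_type and tokenizer_type values below are assumptions for a LLaMA-style checkpoint, not requirements:
base_model: ./llama-7b-hf # local or huggingface repo
model_type: LlamaForCausalLM # assumed: explicit architecture class
tokenizer_type: LlamaTokenizer # assumed: explicit tokenizer class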
dataset
datasets:
  # huggingface repo
  - path: vicgalle/alpaca-gpt4
    type: alpaca

  # huggingface repo with specific configuration/subset
  - path: EleutherAI/pile
    name: enron_emails
    type: completion # format from earlier
    field: text # Optional[str] default: text, field to use for completion data

  # huggingface repo with multiple named configurations/subsets
  - path: bigcode/commitpackft
    name:
      - ruby
      - python
      - typescript
    type: ... # unimplemented custom format

  # fastchat conversation
  # See 'conversation' options: https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
  - path: ...
    type: sharegpt
    conversation: chatml # default: vicuna_v1.1

  # local
  - path: data.jsonl # or json
    ds_type: json # see other options below
    type: alpaca

  # dataset with splits, but no train split
  - path: knowrohit07/know_sql
    type: context_qa.load_v2
    train_on_split: validation

  # loading from s3 or gcs
  # s3 creds will be loaded from the system default and gcs only supports public access
  - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above. Supports s3, gcs.
    ...

  # Loading Data From a Public URL
  # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.
  - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.
    ds_type: json # this is the default, see other options below.
loading
load_in_4bit: true
load_in_8bit: true

bf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.
fp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32
tf32: true # require >=ampere

bfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)
float16: true # use instead of fp16 when you don't want AMP
Note: Repo does not do 4-bit quantization.
lora
adapter: lora # 'qlora' or leave blank for full finetune
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj
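For comparison, a minimal QLoRA sketch might look like the following; the values and the lora_target_linear shortcut are illustrative assumptions, not defaults:
adapter: qlora
load_in_4bit: true # QLoRA keeps the base weights quantized to 4-bit
lora_r: 32 # assumed rank for illustration
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true # target all linear layers instead of listing modules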
Run
accelerate launch -m axolotl.cli.train your_config.yml
[!TIP] You can also reference a config file that is hosted on a public URL, for example
accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml
Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.
Use --debug to see preprocessed examples.
python -m axolotl.cli.preprocess your_config.yml
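A minimal sketch of the related config lines; the dataset_prepared_path value below is an assumed location for the pre-tokenized cache:
dataset_prepared_path: ./last_run_prepared # local folder for saving/loading the pre-tokenized dataset
push_dataset_to_hub: hf_user/repo # optional: push the prepared dataset to Huggingface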
Deepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you might typically be able to fit into your GPU’s VRAM. More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated
We provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.
deepspeed: deepspeed_configs/zero1.json
accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_offload_params: true
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
wandb_mode:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
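As a hypothetical filled-in example (every value below is a placeholder, not a default; wandb_watch and wandb_log_model can simply be left blank as above):
wandb_mode: online # or offline / disabled
wandb_project: my-finetune # placeholder project name
wandb_entity: my-team # placeholder user or team name
wandb_name: llama-7b-run-1 # placeholder run name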
It is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer’s vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
tokens: # these are delimiters
  - "<|im_start|>"
  - "<|im_end|>"
When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer’s vocabulary.
Pass the appropriate flag to the inference command, depending upon what kind of model was trained:
Pretrained LORA:
python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
Full weights finetune:
python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
Full weights finetune w/ a prompt from a text file:
cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
--base_model="./completed-model" --prompter=None --load_in_8bit=True
With gradio hosting:
python -m axolotl.cli.inference examples/your_config.yml --gradio
Please use --sample_packing False if you have it on and receive an error similar to the one below:
Inference Playground
Merge LORA to base
The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.
python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"
You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with
CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...
although this will be very slow; using the config options above is recommended instead.
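As a sketch, the two config options mentioned above could be set like this (the values are illustrative assumptions, not recommendations):
gpu_memory_limit: 20GiB # cap on GPU memory used when loading the model for the merge
lora_on_cpu: true # keep the LoRA weights on CPU during the merge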
Building something cool with Axolotl? Consider adding a badge to your model card.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Bugs? Please check the open issues, or create a new Issue.
PRs are greatly welcome!
Please run the quickstart instructions, followed by the commands below, to set up the environment:
pip3 install -r requirements-dev.txt -r requirements-tests.txt
pre-commit install

# test
pytest tests/

# optional: run against all files
pre-commit run --all-files
Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.