diff --git a/.nojekyll b/.nojekyll index a3b16d5d9..4e2bd18e5 100644 --- a/.nojekyll +++ b/.nojekyll @@ -1 +1 @@ -03cad029 \ No newline at end of file +f973d95d \ No newline at end of file diff --git a/docs/dataset-formats/index.html b/docs/dataset-formats/index.html index 1e7545553..7e783d96a 100644 --- a/docs/dataset-formats/index.html +++ b/docs/dataset-formats/index.html @@ -351,7 +351,7 @@

Dataset Formats

- + Pre-training @@ -359,7 +359,7 @@

Dataset Formats

Data format for a pre-training completion task. - + Instruction Tuning @@ -367,7 +367,7 @@

Dataset Formats

Instruction tuning formats for supervised fine-tuning. - + Conversation @@ -375,7 +375,7 @@

Dataset Formats

Conversation format for supervised fine-tuning. - + Template-Free @@ -383,7 +383,7 @@

Dataset Formats

Construct prompts without a template. - + Custom Pre-Tokenized Dataset diff --git a/index.html b/index.html index b5303fc6e..250c86afa 100644 --- a/index.html +++ b/index.html @@ -679,6 +679,34 @@

La # Managed spot (auto-recovery on preemption) HF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET +
+

Launching on public clouds via dstack

+

To launch on GPU instances (both on-demand and spot) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use dstack.

+

Write a job description in YAML as shown below:

+
# dstack.yaml
+type: task
+
+image: winglian/axolotl-cloud:main-20240429-py3.11-cu121-2.2.1
+
+env:
+  - HUGGING_FACE_HUB_TOKEN
+  - WANDB_API_KEY
+
+commands:
+  - accelerate launch -m axolotl.cli.train config.yaml
+
+ports:
+  - 6006
+
+resources:
+  gpu:
+    memory: 24GB..
+    count: 2
+

Then simply run the job with the dstack run command. Append the --spot option if you want a spot instance. The dstack run command will show you the cheapest instance across the supported cloud services:

+
pip install dstack
+HUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot
+

For more fine-grained use cases, please refer to the official dstack documentation and the detailed description of the axolotl example in the official repository.

+

Dataset

@@ -690,72 +718,72 @@

Config

See examples for a quick start. It is recommended to duplicate one and modify it to your needs. The most important options are:
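To give a sense of how these options fit together, a minimal config sketch follows; the keys are standard axolotl options, while the values (model path, dataset, output directory) are illustrative placeholders rather than recommended defaults:
# minimal_config.yml (illustrative sketch)
base_model: ./llama-7b-hf        # local path or huggingface repo
load_in_8bit: true               # quantized loading for LoRA training

datasets:
  - path: vicgalle/alpaca-gpt4   # huggingface repo or local jsonl
    type: alpaca                 # prompt template / format

adapter: lora                    # 'qlora', or leave blank for a full finetune
lora_r: 8
lora_alpha: 16

output_dir: ./lora-output-dir    # where checkpoints and adapters are written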

All Config Options

@@ -765,7 +793,7 @@

All Config Options

Train

Run

-
accelerate launch -m axolotl.cli.train your_config.yml
+
accelerate launch -m axolotl.cli.train your_config.yml

[!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml

@@ -777,7 +805,7 @@

Preprocess dataset

  • (Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.
  • (Optional): Use --debug to see preprocessed examples.
  • -
    python -m axolotl.cli.preprocess your_config.yml
    +
    python -m axolotl.cli.preprocess your_config.yml
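For reference, the preprocessing-related options mentioned above might look like this in your config (the local path and repo name are illustrative):
dataset_prepared_path: ./last_run_prepared   # local folder for saving/loading the pre-tokenized dataset
push_dataset_to_hub: hf_user/repo            # optional: push the prepared dataset to Huggingface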

    Multi-GPU

    @@ -786,7 +814,7 @@

    Multi-GPU

    DeepSpeed

DeepSpeed is an optimization suite for multi-GPU systems that allows you to train much larger models than you might typically be able to fit into your GPU’s VRAM. More information about the various DeepSpeed optimization types is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated

    We provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.

    -
    deepspeed: deepspeed_configs/zero1.json
    +
    deepspeed: deepspeed_configs/zero1.json
    accelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json
    @@ -794,13 +822,13 @@
    FSDP
    • llama FSDP
    -
    fsdp:
    -  - full_shard
    -  - auto_wrap
    -fsdp_config:
    -  fsdp_offload_params: true
    -  fsdp_state_dict_type: FULL_STATE_DICT
    -  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
    +
    fsdp:
    +  - full_shard
    +  - auto_wrap
    +fsdp_config:
    +  fsdp_offload_params: true
    +  fsdp_state_dict_type: FULL_STATE_DICT
    +  fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
    FSDP + QLoRA
    @@ -812,23 +840,23 @@
Weights & Biases Logging
    • wandb options
    -
    wandb_mode:
    -wandb_project:
    -wandb_entity:
    -wandb_watch:
    -wandb_name:
    -wandb_log_model:
    +
    wandb_mode:
    +wandb_project:
    +wandb_entity:
    +wandb_watch:
    +wandb_name:
    +wandb_log_model:
    Special Tokens

It is important to have special tokens like delimiters, end-of-sequence, and beginning-of-sequence tokens in your tokenizer’s vocabulary. This will help you avoid tokenization issues and help your model train better. You can configure this in axolotl as follows:

    -
    special_tokens:
    -  bos_token: "<s>"
    -  eos_token: "</s>"
    -  unk_token: "<unk>"
    -tokens: # these are delimiters
    -  - "<|im_start|>"
    -  - "<|im_end|>"
    +
    special_tokens:
    +  bos_token: "<s>"
    +  eos_token: "</s>"
    +  unk_token: "<unk>"
    +tokens: # these are delimiters
    +  - "<|im_start|>"
    +  - "<|im_end|>"

    When you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer’s vocabulary.

    @@ -839,14 +867,14 @@

Inference Playground

    Pass the appropriate flag to the inference command, depending upon what kind of model was trained:

    • Pretrained LORA:

      -
      python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
    • +
      python -m axolotl.cli.inference examples/your_config.yml --lora_model_dir="./lora-output-dir"
    • Full weights finetune:

      -
      python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
    • +
      python -m axolotl.cli.inference examples/your_config.yml --base_model="./completed-model"
    • Full weights finetune w/ a prompt from a text file:

      -
      cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
      -  --base_model="./completed-model" --prompter=None --load_in_8bit=True
      +
      cat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \
      +  --base_model="./completed-model" --prompter=None --load_in_8bit=True

      – With gradio hosting

      -
      python -m axolotl.cli.inference examples/your_config.yml --gradio
    • +
      python -m axolotl.cli.inference examples/your_config.yml --gradio

Please use --sample_packing False if you have it enabled and receive an error similar to the one below:

    @@ -856,9 +884,9 @@

Inference Playground

    Merge LORA to base

The following command will merge your LORA adapter with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved; otherwise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.

    -
    python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"
    +
    python3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir="./completed-model"

    You may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with

    -
    CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...
    +
    CUDA_VISIBLE_DEVICES="" python3 -m axolotl.cli.merge_lora ...

although this will be very slow; using the config options above is recommended instead.
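As a reference, a sketch of those two config options; the key names come from the text above, while the 20GiB cap is an illustrative value:
gpu_memory_limit: 20GiB   # cap the GPU memory used when loading the model for the merge
lora_on_cpu: true         # keep the LoRA weights on CPU while merging to reduce GPU memory pressure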

    @@ -916,7 +944,7 @@

    Need help? 🙋

    Badge ❤🏷️

    Building something cool with Axolotl? Consider adding a badge to your model card.

    -
    [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
    +
    [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

    Built with Axolotl

    @@ -931,14 +959,14 @@

    Contributing 🤝

Bugs? Please check the open issues, otherwise create a new Issue.

    PRs are greatly welcome!

Please run the quickstart instructions followed by the commands below to set up your environment:

    -
    pip3 install -r requirements-dev.txt -r requirements-tests.txt
    -pre-commit install
    -
    -# test
    -pytest tests/
    -
    -# optional: run against all files
    -pre-commit run --all-files
    +
    pip3 install -r requirements-dev.txt -r requirements-tests.txt
    +pre-commit install
    +
    +# test
    +pytest tests/
    +
    +# optional: run against all files
    +pre-commit run --all-files

    Thanks to all of our contributors to date. Help drive open source AI progress forward by contributing to Axolotl.

    contributor chart by https://contrib.rocks

    diff --git a/search.json b/search.json index 5fbc33c47..bd1bc4cad 100644 --- a/search.json +++ b/search.json @@ -34,7 +34,7 @@ "href": "index.html#advanced-setup", "title": "Axolotl", "section": "Advanced Setup", - "text": "Advanced Setup\n\nEnvironment\n\nDocker\ndocker run --gpus '\"all\"' --rm -it winglian/axolotl:main-latest\nOr run on the current files for development:\ndocker compose up -d\n\n[!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide’s section on Docker.\n\n\n\nDocker advanced\n\nA more powerful Docker command to run would be this:\ndocker run --privileged --gpus '\"all\"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src=\"${PWD}\",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-latest\nIt additionally: * Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through --ipc and --ulimit args. * Persists the downloaded HF data (models etc.) and your modifications to axolotl code through --mount/-v args. * The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal. * The --privileged flag gives all capabilities to the container. * The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.\nMore information on nvidia website\n\n\n\nConda/Pip venv\n\nInstall python >=3.10\nInstall pytorch stable https://pytorch.org/get-started/locally/\nInstall Axolotl along with python dependencies bash pip3 install packaging pip3 install -e '.[flash-attn,deepspeed]'\n(Optional) Login to Huggingface to use gated models/datasets. bash huggingface-cli login Get the token at huggingface.co/settings/tokens\n\n\n\nCloud GPU\nFor cloud GPU providers that support docker images, use winglian/axolotl-cloud:main-latest\n\non Latitude.sh use this direct link\non JarvisLabs.ai use this direct link\non RunPod use this direct link\n\n\n\nBare Metal Cloud GPU\n\nLambdaLabs\n\n\nClick to Expand\n\n\nInstall python\n\nsudo apt update\nsudo apt install -y python3.10\n\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1\nsudo update-alternatives --config python # pick 3.10 if given option\npython -V # should be 3.10\n\nInstall pip\n\nwget https://bootstrap.pypa.io/get-pip.py\npython get-pip.py\n\nInstall Pytorch https://pytorch.org/get-started/locally/\nFollow instructions on quickstart.\nRun\n\npip3 install protobuf==3.20.3\npip3 install -U --ignore-installed requests Pillow psutil scipy\n\nSet path\n\nexport LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH\n\n\n\nGCP\n\n\nClick to Expand\n\nUse a Deeplearning linux OS with cuda and pytorch installed. 
Then follow instructions on quickstart.\nMake sure to run the below to uninstall xla.\npip uninstall -y torch_xla[tpu]\n\n\n\n\nWindows\nPlease use WSL or Docker!\n\n\nMac\nUse the below instead of the install method in QuickStart.\npip3 install -e '.'\nMore info: mac.md\n\n\nGoogle Colab\nPlease use this example notebook.\n\n\nLaunching on public clouds via SkyPilot\nTo launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:\npip install \"skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]\" # choose your clouds\nsky check\nGet the example YAMLs of using Axolotl to finetune mistralai/Mistral-7B-v0.1:\ngit clone https://github.com/skypilot-org/skypilot.git\ncd skypilot/llm/axolotl\nUse one command to launch:\n# On-demand\nHF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN\n\n# Managed spot (auto-recovery on preemption)\nHF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET\n\n\n\nDataset\nAxolotl supports a variety of dataset formats. It is recommended to use a JSONL. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.\nSee these docs for more information on how to use different dataset formats.\n\n\nConfig\nSee examples for quick start. It is recommended to duplicate and modify to your needs. The most important options are:\n\nmodel\nbase_model: ./llama-7b-hf # local or huggingface repo\nNote: The code will load the right architecture.\ndataset\ndatasets:\n # huggingface repo\n - path: vicgalle/alpaca-gpt4\n type: alpaca\n\n # huggingface repo with specific configuration/subset\n - path: EleutherAI/pile\n name: enron_emails\n type: completion # format from earlier\n field: text # Optional[str] default: text, field to use for completion data\n\n # huggingface repo with multiple named configurations/subsets\n - path: bigcode/commitpackft\n name:\n - ruby\n - python\n - typescript\n type: ... # unimplemented custom format\n\n # fastchat conversation\n # See 'conversation' options: https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py\n - path: ...\n type: sharegpt\n conversation: chatml # default: vicuna_v1.1\n\n # local\n - path: data.jsonl # or json\n ds_type: json # see other options below\n type: alpaca\n\n # dataset with splits, but no train split\n - path: knowrohit07/know_sql\n type: context_qa.load_v2\n train_on_split: validation\n\n # loading from s3 or gcs\n # s3 creds will be loaded from the system default and gcs only supports public access\n - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above. Supports s3, gcs.\n ...\n\n # Loading Data From a Public URL\n # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.\n - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.\n ds_type: json # this is the default, see other options below.\nloading\nload_in_4bit: true\nload_in_8bit: true\n\nbf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.\nfp16: # leave empty to use fp16 when bf16 is 'auto'. 
set to false if you want to fallback to fp32\ntf32: true # require >=ampere\n\nbfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)\nfloat16: true # use instead of fp16 when you don't want AMP\nNote: Repo does not do 4-bit quantization.\nlora\nadapter: lora # 'qlora' or leave blank for full finetune\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n\n\nAll Config Options\nSee these docs for all config options.\n\n\n\nTrain\nRun\naccelerate launch -m axolotl.cli.train your_config.yml\n\n[!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml\n\n\nPreprocess dataset\nYou can optionally pre-tokenize dataset with the following before finetuning. This is recommended for large datasets.\n\nSet dataset_prepared_path: to a local folder for saving and loading pre-tokenized dataset.\n(Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.\n(Optional): Use --debug to see preprocessed examples.\n\npython -m axolotl.cli.preprocess your_config.yml\n\n\nMulti-GPU\nBelow are the options available in axolotl for training with multiple GPUs. Note that DeepSpeed is the recommended multi-GPU option currently because FSDP may experience loss instability.\n\nDeepSpeed\nDeepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you might typically be able to fit into your GPU’s VRAM. More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated\nWe provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.\ndeepspeed: deepspeed_configs/zero1.json\naccelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json\n\n\nFSDP\n\nllama FSDP\n\nfsdp:\n - full_shard\n - auto_wrap\nfsdp_config:\n fsdp_offload_params: true\n fsdp_state_dict_type: FULL_STATE_DICT\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\n\n\nFSDP + QLoRA\nAxolotl supports training with FSDP and QLoRA, see these docs for more information.\n\n\nWeights & Biases Logging\nMake sure your WANDB_API_KEY environment variable is set (recommended) or you login to wandb with wandb login.\n\nwandb options\n\nwandb_mode:\nwandb_project:\nwandb_entity:\nwandb_watch:\nwandb_name:\nwandb_log_model:\n\n\nSpecial Tokens\nIt is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer’s vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:\nspecial_tokens:\n bos_token: \"<s>\"\n eos_token: \"</s>\"\n unk_token: \"<unk>\"\ntokens: # these are delimiters\n - \"<|im_start|>\"\n - \"<|im_end|>\"\nWhen you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer’s vocabulary.\n\n\n\n\nInference Playground\nAxolotl allows you to load your model in an interactive terminal playground for quick experimentation. 
The config file is the same config file used for training.\nPass the appropriate flag to the inference command, depending upon what kind of model was trained:\n\nPretrained LORA:\npython -m axolotl.cli.inference examples/your_config.yml --lora_model_dir=\"./lora-output-dir\"\nFull weights finetune:\npython -m axolotl.cli.inference examples/your_config.yml --base_model=\"./completed-model\"\nFull weights finetune w/ a prompt from a text file:\ncat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \\\n --base_model=\"./completed-model\" --prompter=None --load_in_8bit=True\n– With gradio hosting\npython -m axolotl.cli.inference examples/your_config.yml --gradio\n\nPlease use --sample_packing False if you have it on and receive the error similar to below:\n\nRuntimeError: stack expects each tensor to be equal size, but got [1, 32, 1, 128] at entry 0 and [1, 32, 8, 128] at entry 1\n\n\n\nMerge LORA to base\nThe following command will merge your LORA adapater with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved, otherwhise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.\npython3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir=\"./completed-model\"\nYou may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. If you still run out of CUDA memory, you can try to merge in system RAM with\nCUDA_VISIBLE_DEVICES=\"\" python3 -m axolotl.cli.merge_lora ...\nalthough this will be very slow, and using the config options above are recommended instead.", + "text": "Advanced Setup\n\nEnvironment\n\nDocker\ndocker run --gpus '\"all\"' --rm -it winglian/axolotl:main-latest\nOr run on the current files for development:\ndocker compose up -d\n\n[!Tip] If you want to debug axolotl or prefer to use Docker as your development environment, see the debugging guide’s section on Docker.\n\n\n\nDocker advanced\n\nA more powerful Docker command to run would be this:\ndocker run --privileged --gpus '\"all\"' --shm-size 10g --rm -it --name axolotl --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --mount type=bind,src=\"${PWD}\",target=/workspace/axolotl -v ${HOME}/.cache/huggingface:/root/.cache/huggingface winglian/axolotl:main-latest\nIt additionally: * Prevents memory issues when running e.g. deepspeed (e.g. you could hit SIGBUS/signal 7 error) through --ipc and --ulimit args. * Persists the downloaded HF data (models etc.) and your modifications to axolotl code through --mount/-v args. * The --name argument simply makes it easier to refer to the container in vscode (Dev Containers: Attach to Running Container...) or in your terminal. * The --privileged flag gives all capabilities to the container. * The --shm-size 10g argument increases the shared memory size. Use this if you see exitcode: -7 errors using deepspeed.\nMore information on nvidia website\n\n\n\nConda/Pip venv\n\nInstall python >=3.10\nInstall pytorch stable https://pytorch.org/get-started/locally/\nInstall Axolotl along with python dependencies bash pip3 install packaging pip3 install -e '.[flash-attn,deepspeed]'\n(Optional) Login to Huggingface to use gated models/datasets. 
bash huggingface-cli login Get the token at huggingface.co/settings/tokens\n\n\n\nCloud GPU\nFor cloud GPU providers that support docker images, use winglian/axolotl-cloud:main-latest\n\non Latitude.sh use this direct link\non JarvisLabs.ai use this direct link\non RunPod use this direct link\n\n\n\nBare Metal Cloud GPU\n\nLambdaLabs\n\n\nClick to Expand\n\n\nInstall python\n\nsudo apt update\nsudo apt install -y python3.10\n\nsudo update-alternatives --install /usr/bin/python python /usr/bin/python3.10 1\nsudo update-alternatives --config python # pick 3.10 if given option\npython -V # should be 3.10\n\nInstall pip\n\nwget https://bootstrap.pypa.io/get-pip.py\npython get-pip.py\n\nInstall Pytorch https://pytorch.org/get-started/locally/\nFollow instructions on quickstart.\nRun\n\npip3 install protobuf==3.20.3\npip3 install -U --ignore-installed requests Pillow psutil scipy\n\nSet path\n\nexport LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH\n\n\n\nGCP\n\n\nClick to Expand\n\nUse a Deeplearning linux OS with cuda and pytorch installed. Then follow instructions on quickstart.\nMake sure to run the below to uninstall xla.\npip uninstall -y torch_xla[tpu]\n\n\n\n\nWindows\nPlease use WSL or Docker!\n\n\nMac\nUse the below instead of the install method in QuickStart.\npip3 install -e '.'\nMore info: mac.md\n\n\nGoogle Colab\nPlease use this example notebook.\n\n\nLaunching on public clouds via SkyPilot\nTo launch on GPU instances (both on-demand and spot instances) on 7+ clouds (GCP, AWS, Azure, OCI, and more), you can use SkyPilot:\npip install \"skypilot-nightly[gcp,aws,azure,oci,lambda,kubernetes,ibm,scp]\" # choose your clouds\nsky check\nGet the example YAMLs of using Axolotl to finetune mistralai/Mistral-7B-v0.1:\ngit clone https://github.com/skypilot-org/skypilot.git\ncd skypilot/llm/axolotl\nUse one command to launch:\n# On-demand\nHF_TOKEN=xx sky launch axolotl.yaml --env HF_TOKEN\n\n# Managed spot (auto-recovery on preemption)\nHF_TOKEN=xx BUCKET=<unique-name> sky spot launch axolotl-spot.yaml --env HF_TOKEN --env BUCKET\n\n\nLaunching on public clouds via dstack\nTo launch on GPU instance (both on-demand and spot instances) on public clouds (GCP, AWS, Azure, Lambda Labs, TensorDock, Vast.ai, and CUDO), you can use dstack.\nWrite a job description in YAML as below:\n# dstack.yaml\ntype: task\n\nimage: winglian/axolotl-cloud:main-20240429-py3.11-cu121-2.2.1\n\nenv:\n - HUGGING_FACE_HUB_TOKEN\n - WANDB_API_KEY\n\ncommands:\n - accelerate launch -m axolotl.cli.train config.yaml\n\nports:\n - 6006\n\nresources:\n gpu:\n memory: 24GB..\n count: 2\nthen, simply run the job with dstack run command. Append --spot option if you want spot instance. dstack run command will show you the instance with cheapest price across multi cloud services:\npip install dstack\nHUGGING_FACE_HUB_TOKEN=xxx WANDB_API_KEY=xxx dstack run . -f dstack.yaml # --spot\nFor further and fine-grained use cases, please refer to the official dstack documents and the detailed description of axolotl example on the official repository.\n\n\n\nDataset\nAxolotl supports a variety of dataset formats. It is recommended to use a JSONL. The schema of the JSONL depends upon the task and the prompt template you wish to use. Instead of a JSONL, you can also use a HuggingFace dataset with columns for each JSONL field.\nSee these docs for more information on how to use different dataset formats.\n\n\nConfig\nSee examples for quick start. It is recommended to duplicate and modify to your needs. 
The most important options are:\n\nmodel\nbase_model: ./llama-7b-hf # local or huggingface repo\nNote: The code will load the right architecture.\ndataset\ndatasets:\n # huggingface repo\n - path: vicgalle/alpaca-gpt4\n type: alpaca\n\n # huggingface repo with specific configuration/subset\n - path: EleutherAI/pile\n name: enron_emails\n type: completion # format from earlier\n field: text # Optional[str] default: text, field to use for completion data\n\n # huggingface repo with multiple named configurations/subsets\n - path: bigcode/commitpackft\n name:\n - ruby\n - python\n - typescript\n type: ... # unimplemented custom format\n\n # fastchat conversation\n # See 'conversation' options: https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py\n - path: ...\n type: sharegpt\n conversation: chatml # default: vicuna_v1.1\n\n # local\n - path: data.jsonl # or json\n ds_type: json # see other options below\n type: alpaca\n\n # dataset with splits, but no train split\n - path: knowrohit07/know_sql\n type: context_qa.load_v2\n train_on_split: validation\n\n # loading from s3 or gcs\n # s3 creds will be loaded from the system default and gcs only supports public access\n - path: s3://path_to_ds # Accepts folder with arrow/parquet or file path like above. Supports s3, gcs.\n ...\n\n # Loading Data From a Public URL\n # - The file format is `json` (which includes `jsonl`) by default. For different formats, adjust the `ds_type` option accordingly.\n - path: https://some.url.com/yourdata.jsonl # The URL should be a direct link to the file you wish to load. URLs must use HTTPS protocol, not HTTP.\n ds_type: json # this is the default, see other options below.\nloading\nload_in_4bit: true\nload_in_8bit: true\n\nbf16: auto # require >=ampere, auto will detect if your GPU supports this and choose automatically.\nfp16: # leave empty to use fp16 when bf16 is 'auto'. set to false if you want to fallback to fp32\ntf32: true # require >=ampere\n\nbfloat16: true # require >=ampere, use instead of bf16 when you don't want AMP (automatic mixed precision)\nfloat16: true # use instead of fp16 when you don't want AMP\nNote: Repo does not do 4-bit quantization.\nlora\nadapter: lora # 'qlora' or leave blank for full finetune\nlora_r: 8\nlora_alpha: 16\nlora_dropout: 0.05\nlora_target_modules:\n - q_proj\n - v_proj\n\n\nAll Config Options\nSee these docs for all config options.\n\n\n\nTrain\nRun\naccelerate launch -m axolotl.cli.train your_config.yml\n\n[!TIP] You can also reference a config file that is hosted on a public URL, for example accelerate launch -m axolotl.cli.train https://yourdomain.com/your_config.yml\n\n\nPreprocess dataset\nYou can optionally pre-tokenize dataset with the following before finetuning. This is recommended for large datasets.\n\nSet dataset_prepared_path: to a local folder for saving and loading pre-tokenized dataset.\n(Optional): Set push_dataset_to_hub: hf_user/repo to push it to Huggingface.\n(Optional): Use --debug to see preprocessed examples.\n\npython -m axolotl.cli.preprocess your_config.yml\n\n\nMulti-GPU\nBelow are the options available in axolotl for training with multiple GPUs. Note that DeepSpeed is the recommended multi-GPU option currently because FSDP may experience loss instability.\n\nDeepSpeed\nDeepspeed is an optimization suite for multi-gpu systems allowing you to train much larger models than you might typically be able to fit into your GPU’s VRAM. 
More information about the various optimization types for deepspeed is available at https://huggingface.co/docs/accelerate/main/en/usage_guides/deepspeed#what-is-integrated\nWe provide several default deepspeed JSON configurations for ZeRO stage 1, 2, and 3.\ndeepspeed: deepspeed_configs/zero1.json\naccelerate launch -m axolotl.cli.train examples/llama-2/config.yml --deepspeed deepspeed_configs/zero1.json\n\n\nFSDP\n\nllama FSDP\n\nfsdp:\n - full_shard\n - auto_wrap\nfsdp_config:\n fsdp_offload_params: true\n fsdp_state_dict_type: FULL_STATE_DICT\n fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer\n\n\nFSDP + QLoRA\nAxolotl supports training with FSDP and QLoRA, see these docs for more information.\n\n\nWeights & Biases Logging\nMake sure your WANDB_API_KEY environment variable is set (recommended) or you login to wandb with wandb login.\n\nwandb options\n\nwandb_mode:\nwandb_project:\nwandb_entity:\nwandb_watch:\nwandb_name:\nwandb_log_model:\n\n\nSpecial Tokens\nIt is important to have special tokens like delimiters, end-of-sequence, beginning-of-sequence in your tokenizer’s vocabulary. This will help you avoid tokenization issues and help your model train better. You can do this in axolotl like this:\nspecial_tokens:\n bos_token: \"<s>\"\n eos_token: \"</s>\"\n unk_token: \"<unk>\"\ntokens: # these are delimiters\n - \"<|im_start|>\"\n - \"<|im_end|>\"\nWhen you include these tokens in your axolotl config, axolotl adds these tokens to the tokenizer’s vocabulary.\n\n\n\n\nInference Playground\nAxolotl allows you to load your model in an interactive terminal playground for quick experimentation. The config file is the same config file used for training.\nPass the appropriate flag to the inference command, depending upon what kind of model was trained:\n\nPretrained LORA:\npython -m axolotl.cli.inference examples/your_config.yml --lora_model_dir=\"./lora-output-dir\"\nFull weights finetune:\npython -m axolotl.cli.inference examples/your_config.yml --base_model=\"./completed-model\"\nFull weights finetune w/ a prompt from a text file:\ncat /tmp/prompt.txt | python -m axolotl.cli.inference examples/your_config.yml \\\n --base_model=\"./completed-model\" --prompter=None --load_in_8bit=True\n– With gradio hosting\npython -m axolotl.cli.inference examples/your_config.yml --gradio\n\nPlease use --sample_packing False if you have it on and receive the error similar to below:\n\nRuntimeError: stack expects each tensor to be equal size, but got [1, 32, 1, 128] at entry 0 and [1, 32, 8, 128] at entry 1\n\n\n\nMerge LORA to base\nThe following command will merge your LORA adapater with your base model. You can optionally pass the argument --lora_model_dir to specify the directory where your LORA adapter was saved, otherwhise, this will be inferred from output_dir in your axolotl config file. The merged model is saved in the sub-directory {lora_model_dir}/merged.\npython3 -m axolotl.cli.merge_lora your_config.yml --lora_model_dir=\"./completed-model\"\nYou may need to use the gpu_memory_limit and/or lora_on_cpu config options to avoid running out of memory. 
If you still run out of CUDA memory, you can try to merge in system RAM with\nCUDA_VISIBLE_DEVICES=\"\" python3 -m axolotl.cli.merge_lora ...\nalthough this will be very slow, and using the config options above are recommended instead.", "crumbs": [ "Home" ] diff --git a/sitemap.xml b/sitemap.xml index 991598600..5021465fc 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,90 +2,90 @@ https://OpenAccess-AI-Collective.github.io/axolotl/index.html - 2024-05-11T22:29:14.767Z + 2024-05-14T12:17:44.902Z https://OpenAccess-AI-Collective.github.io/axolotl/TODO.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.882Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/multi-node.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/rlhf.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/nccl.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/multipack.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/dataset-formats/tokenized.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/dataset-formats/inst_tune.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.882Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/dataset-formats/conversation.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.882Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/batch_vs_grad.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.882Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/input_output.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/faq.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/dataset_preprocessing.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/dataset-formats/template_free.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.882Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/dataset-formats/pretraining.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.882Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/dataset-formats/index.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.882Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/mac.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/config.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.882Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/debugging.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/docs/fsdp_qlora.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/examples/colab-notebooks/colab-axolotl-example.html - 2024-05-11T22:29:14.755Z + 2024-05-14T12:17:44.886Z https://OpenAccess-AI-Collective.github.io/axolotl/FAQS.html - 2024-05-11T22:29:14.751Z + 2024-05-14T12:17:44.882Z