Commit b13172a

Merge pull request #48 from NanoCode012/feat/update-readme
Feat: Minor update readme from dev changes

2 parents 5e6baab + 57ee1b8

1 file changed: README.md (+59 -18 lines)
@@ -97,6 +97,18 @@ Have dataset(s) in one of the following format (JSONL recommended):
   ```json
   {"instruction": "...", "input": "...", "output": "...", "reflection": "...", "corrected": "..."}
   ```
+- `explainchoice`: question, choices, (solution OR explanation)
+  ```json
+  {"question": "...", "choices": ["..."], "solution": "...", "explanation": "..."}
+  ```
+- `concisechoice`: question, choices, (solution OR explanation)
+  ```json
+  {"question": "...", "choices": ["..."], "solution": "...", "explanation": "..."}
+  ```
+- `summarizetldr`: article and summary
+  ```json
+  {"article": "...", "summary": "..."}
+  ```
 
 > Have some new format to propose? Check if it's already defined in [data.py](src/axolotl/utils/data.py) in `dev` branch!
 
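A dataset in one of these new formats would be referenced from the config's `datasets` list (shown further down in this diff); a minimal sketch, with a hypothetical local file path:

```yaml
datasets:
  # hypothetical local JSONL in the new summarizetldr format
  - path: data/tldr_articles.jsonl
    type: summarizetldr
```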
@@ -124,17 +136,17 @@ See sample configs in [configs](configs) folder or [examples](examples) for quic
 
 - loading
   ```yaml
-  load_4bit: true
+  load_in_4bit: true
   load_in_8bit: true
-  bf16: true
+  bf16: true # require >=ampere
   fp16: true
-  tf32: true
+  tf32: true # require >=ampere
   ```
   Note: Repo does not do 4-bit quantization.
 
 - lora
   ```yaml
-  adapter: lora # blank for full finetune
+  adapter: lora # qlora or leave blank for full finetune
   lora_r: 8
   lora_alpha: 16
   lora_dropout: 0.05
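As a rough sketch of how the renamed and annotated flags above could combine for a bitsandbytes 4-bit run (an illustrative pairing inferred from the `load_in_4bit` and `qlora` comments, not a recipe stated in the README):

```yaml
load_in_4bit: true   # bitsandbytes 4-bit loading
adapter: qlora       # per the comment: qlora, lora, or blank for full finetune
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
```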
@@ -163,28 +175,32 @@ tokenizer_type: AutoTokenizer
 # Trust remote code for untrusted source
 trust_remote_code:
 
-# whether you are training a 4-bit quantized model
+# whether you are training a 4-bit GPTQ quantized model
 load_4bit: true
 gptq_groupsize: 128 # group size
 gptq_model_v1: false # v1 or v2
 
 # this will attempt to quantize the model down to 8 bits and use adam 8 bit optimizer
 load_in_8bit: true
+# use bitsandbytes 4 bit
+load_in_4bit:
 
 # Use CUDA bf16
-bf16: true
+bf16: true # bool or 'full' for `bf16_full_eval`. require >=ampere
 # Use CUDA fp16
 fp16: true
 # Use CUDA tf32
-tf32: true
+tf32: true # require >=ampere
 
 # a list of one or more datasets to finetune the model with
 datasets:
   # this can be either a hf dataset, or relative path
   - path: vicgalle/alpaca-gpt4
     # The type of prompt to use for training. [alpaca, sharegpt, gpteacher, oasst, reflection]
-    type: alpaca
+    type: alpaca # format OR format:prompt_style (chat/instruct)
     data_files: # path to source data files
+    shards: # true if use subset data. make sure to set `shards` param also
+    shards: # number of shards to split dataset into
 
 # axolotl attempts to save the dataset as an arrow after packing the data together so
 # subsequent training attempts load faster, relative path
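The new `format:prompt_style` comment implies dataset entries such as this sketch (the `chat` style is one of the two options named in the comment; the shard count is a placeholder):

```yaml
datasets:
  - path: vicgalle/alpaca-gpt4
    type: alpaca:chat   # format:prompt_style
    shards: 10          # placeholder shard count
```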
@@ -201,7 +217,7 @@ sequence_len: 2048
 # inspired by StackLLaMA. see https://huggingface.co/blog/stackllama#supervised-fine-tuning
 max_packed_sequence_len: 1024
 
-# if you want to use lora, leave blank to train all parameters in original model
+# use 'lora' or 'qlora', or leave blank to train all parameters in the original model
 adapter: lora
 # if you already have a lora model trained that you want to load, put that here
 # lora hyperparameters
@@ -224,6 +240,7 @@ lora_out_dir:
 lora_fan_in_fan_out: false
 
 # wandb configuration if you're using it
+wandb_mode:
 wandb_project:
 wandb_watch:
 wandb_run_id:
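A filled-in sketch of the wandb block with the new key (values are illustrative; `offline` is a standard wandb mode, and it is an assumption that `wandb_mode` is passed straight through to wandb):

```yaml
wandb_mode: offline        # assumed pass-through: online/offline/disabled
wandb_project: my-finetune # illustrative project name
```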
@@ -252,8 +269,18 @@ gradient_checkpointing: false
 # stop training after this many evaluation losses have increased in a row
 # https://huggingface.co/transformers/v4.2.2/_modules/transformers/trainer_callback.html#EarlyStoppingCallback
 early_stopping_patience: 3
-# specify a scheduler to use with the optimizer. only one_cycle is supported currently
-lr_scheduler:
+
+# specify a scheduler and kwargs to use with the optimizer
+lr_scheduler: # 'one_cycle' | 'log_sweep' | empty for cosine
+lr_scheduler_kwargs:
+
+# for one_cycle optim
+lr_div_factor: # learning rate div factor
+
+# for log_sweep optim
+log_sweep_min_lr:
+log_sweep_max_lr:
+
 # specify optimizer
 optimizer:
 # specify weight decay
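For example, the two named schedulers might be configured like this sketch (values are placeholders, not tuned defaults):

```yaml
# one_cycle variant
lr_scheduler: one_cycle
lr_div_factor: 25         # placeholder

# log_sweep variant (commented out; use one scheduler at a time)
# lr_scheduler: log_sweep
# log_sweep_min_lr: 1e-6  # placeholder
# log_sweep_max_lr: 1e-3  # placeholder
```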
@@ -262,7 +289,7 @@ weight_decay:
 # whether to use xformers attention patch https://github.com/facebookresearch/xformers:
 xformers_attention:
 # whether to use flash attention patch https://github.com/HazyResearch/flash-attention:
-flash_attention:
+flash_attention: # require a100 for llama
 
 # resume from a specific checkpoint dir
 resume_from_checkpoint:
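A sketch of the attention patches as config switches (boolean values are an assumption; the A100 note comes from the new comment):

```yaml
xformers_attention: true   # assumed boolean toggle
# flash_attention: true    # per the new comment, requires an A100 for llama
```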
@@ -288,11 +315,17 @@ fsdp_config:
 # Deepspeed
 deepspeed:
 
-# TODO
+# Path to torch distx for optim 'adamw_anyprecision'
 torchdistx_path:
 
+# Set padding for data collator to 'longest'
+collator_pad_to_longest:
+
 # Debug mode
 debug:
+
+# Seed
+seed:
 ```
 
 </details>
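Tying the new keys to the optimizer named in the comment, a minimal sketch (the path and values are illustrative):

```yaml
optimizer: adamw_anyprecision
torchdistx_path: /path/to/torchdistx   # illustrative local checkout
collator_pad_to_longest: true          # assumed boolean
seed: 42                               # any fixed integer
```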
@@ -317,12 +350,16 @@ accelerate launch scripts/finetune.py configs/your_config.yml
 
 ### Inference
 
-Add `--inference` flag to train command above
+Pass the appropriate flag to the train command:
 
-If you are inferencing a pretrained LORA, pass
-```bash
---lora_model_dir ./completed-model
-```
+- Pretrained LORA:
+  ```bash
+  --inference --lora_model_dir ./completed-model
+  ```
+- Full weights finetune:
+  ```bash
+  --inference --base_model ./completed-model
+  ```
 
 ### Merge LORA to base
 
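Putting the new inference flags together with the train command from the hunk header, a full invocation would look roughly like this (config and model paths are the placeholders already used in the README):

```bash
# pretrained LoRA
accelerate launch scripts/finetune.py configs/your_config.yml \
    --inference --lora_model_dir ./completed-model

# full-weights finetune
accelerate launch scripts/finetune.py configs/your_config.yml \
    --inference --base_model ./completed-model
```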
@@ -341,6 +378,10 @@ Please reduce any below
 - `eval_batch_size`
 - `sequence_len`
 
+> RuntimeError: expected scalar type Float but found Half
+
+Try setting `fp16: true`
+
 ## Contributing 🤝
 
 Bugs? Please check for open issue else create a new [Issue](https://github.com/OpenAccess-AI-Collective/axolotl/issues/new).
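For the troubleshooting tips in this hunk, the corresponding config edits would be along these lines (the reduced values are placeholders, not recommendations):

```yaml
eval_batch_size: 1   # reduce on OOM
sequence_len: 1024   # reduce on OOM
fp16: true           # for "expected scalar type Float but found Half"
```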
