
Add mlflow callback for pushing config to mlflow artifacts #1125

Merged

Conversation

JohanWork
Contributor

Adding a callback to push the axolotl config to mlflow artifacts. Similar setup to the one that already exists for wandb.
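
For context, this follows the same pattern as the existing wandb config callback: a transformers TrainerCallback that uploads the config file when training begins. A minimal sketch of that idea (class and parameter names here are illustrative, not necessarily those in the merged code):

```python
# Illustrative sketch only; the merged implementation may differ.
import mlflow
from transformers import (
    TrainerCallback,
    TrainerControl,
    TrainerState,
    TrainingArguments,
)


class SaveAxolotlConfigtoMlflowCallback(TrainerCallback):
    """Push the axolotl config file to mlflow artifacts when training starts."""

    def __init__(self, axolotl_config_path: str):
        self.axolotl_config_path = axolotl_config_path

    def on_train_begin(
        self,
        args: TrainingArguments,
        state: TrainerState,
        control: TrainerControl,
        **kwargs,
    ) -> TrainerControl:
        # Log only from the main process so multi-GPU runs don't upload duplicates.
        if state.is_world_process_zero:
            try:
                mlflow.log_artifact(self.axolotl_config_path)
            except (FileNotFoundError, ConnectionError) as err:
                # A failed upload shouldn't abort the training run.
                print(f"Error while saving axolotl config to mlflow: {err}")
        return control
```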

JohanWork changed the title from "Adding mlflow callback config" to "Add mlflow callback for pushing config to mlflow artifacts" on Jan 15, 2024
JohanWork mentioned this pull request on Jan 15, 2024
@winglian
Collaborator

@JohanWork do you have some screenshots to confirm this functionality? Thanks!

@JohanWork
Contributor Author

Sure, will add it!

@JohanWork
Contributor Author

JohanWork commented Jan 18, 2024

Here is the screenshot; I also added the config used below for reference. @winglian
[Screenshot 2024-01-18 at 15:58:53: mlflow artifacts view]

```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 1096
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

mlflow_experiment_name: test-test

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
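
Once a run with this config finishes, the uploaded config can also be confirmed without the UI. A small sketch using the standard MlflowClient API, assuming the default tracking URI and the mlflow_experiment_name from the config above:

```python
# Sketch: list artifacts of the most recent run in the "test-test" experiment
# to confirm the axolotl config yaml was uploaded.
from mlflow.tracking import MlflowClient

client = MlflowClient()
experiment = client.get_experiment_by_name("test-test")
runs = client.search_runs([experiment.experiment_id], max_results=1)
for artifact in client.list_artifacts(runs[0].info.run_id):
    print(artifact.path)  # the config file should appear here
```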

winglian merged commit b8e5603 into axolotl-ai-cloud:main on Jan 22, 2024
7 checks passed