Adding dpo training #1209

Open
wants to merge 24 commits into main

Conversation

Goekdeniz-Guelmez
Contributor

Training:

python -m mlx_lm.lora \
    --model mlx-community/Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-4bit \
    --train \
    --data /Users/gokdenizgulmez/Desktop/dpo_test_data \
    --iters 100 \
    --batch-size 1 \
    --num-layers 1 \
    --val-batches 1 \
    --steps-per-report 1 \
    --adapter-path /Users/gokdenizgulmez/Desktop/test-dpo \
    --max-seq-length 1024 \
    --grad-checkpoint \
    --training-mode dpo \
    --fine-tune-type lora \
    --dpo-loss-type sigmoid \
    --beta 0.1 \
    --steps-per-eval 50
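
The --data argument points at a local directory of JSONL preference files (or, as shown further down, a Hugging Face dataset name). As a rough illustration, a single record could look like the line below, assuming the common prompt/chosen/rejected layout used by most DPO datasets; the exact field names this PR expects may differ:

{"prompt": "what's up", "chosen": "Not much, how about you? What's up?", "rejected": "I'm just a program, so I don't have real-time awareness of my surroundings. How can I assist you today?"}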

Output:

Iter 1: Val loss 0.693, Val chosen reward 0.000, Val rejected reward 0.000, Val took 0.464s
Iter 1: Train loss 0.693, Chosen reward 0.000, Rejected reward 0.000, Learning Rate 1.000e-05, It/sec 1.030, Tokens/sec 557.185, Trained Tokens 541.0, Peak mem 2.238 GB
Iter 2: Train loss 0.666, Chosen reward -0.211, Rejected reward -0.266, Learning Rate 1.000e-05, It/sec 1.019, Tokens/sec 585.139, Trained Tokens 1115.0, Peak mem 2.334 GB
Iter 3: Train loss 0.610, Chosen reward -0.459, Rejected reward -0.631, Learning Rate 1.000e-05, It/sec 1.107, Tokens/sec 587.773, Trained Tokens 1646.0, Peak mem 2.334 GB
Iter 4: Train loss 0.560, Chosen reward -0.507, Rejected reward -0.793, Learning Rate 1.000e-05, It/sec 1.131, Tokens/sec 593.797, Trained Tokens 2171.0, Peak mem 2.334 GB
Iter 5: Train loss 0.408, Chosen reward -1.085, Rejected reward -1.772, Learning Rate 1.000e-05, It/sec 1.120, Tokens/sec 587.962, Trained Tokens 2696.0, Peak mem 2.334 GB
Iter 6: Train loss 0.074, Chosen reward -1.023, Rejected reward -3.596, Learning Rate 1.000e-05, It/sec 0.991, Tokens/sec 535.864, Trained Tokens 3237.0, Peak mem 2.334 GB
Iter 7: Train loss 0.151, Chosen reward -2.117, Rejected reward -3.933, Learning Rate 1.000e-05, It/sec 1.066, Tokens/sec 566.253, Trained Tokens 3768.0, Peak mem 2.334 GB
Iter 8: Train loss 0.033, Chosen reward -1.216, Rejected reward -4.615, Learning Rate 1.000e-05, It/sec 1.015, Tokens/sec 582.525, Trained Tokens 4342.0, Peak mem 2.334 GB
Iter 9: Train loss 0.023, Chosen reward -1.698, Rejected reward -5.460, Learning Rate 1.000e-05, It/sec 1.009, Tokens/sec 579.044, Trained Tokens 4916.0, Peak mem 2.334 GB
Iter 10: Train loss 0.027, Chosen reward -3.629, Rejected reward -7.210, Learning Rate 1.000e-05, It/sec 1.113, Tokens/sec 584.280, Trained Tokens 5441.0, Peak mem 2.334 GB
Iter 11: Train loss 0.013, Chosen reward -3.373, Rejected reward -7.736, Learning Rate 1.000e-05, It/sec 1.045, Tokens/sec 565.236, Trained Tokens 5982.0, Peak mem 2.334 GB
Iter 12: Train loss 0.021, Chosen reward -4.290, Rejected reward -8.149, Learning Rate 1.000e-05, It/sec 1.122, Tokens/sec 595.913, Trained Tokens 6513.0, Peak mem 2.334 GB
Iter 13: Train loss 0.015, Chosen reward -4.516, Rejected reward -8.693, Learning Rate 1.000e-05, It/sec 1.131, Tokens/sec 600.378, Trained Tokens 7044.0, Peak mem 2.334 GB
Iter 14: Train loss 0.004, Chosen reward -5.051, Rejected reward -10.507, Learning Rate 1.000e-05, It/sec 1.129, Tokens/sec 592.925, Trained Tokens 7569.0, Peak mem 2.334 GB
Iter 15: Train loss 0.005, Chosen reward -5.147, Rejected reward -10.429, Learning Rate 1.000e-05, It/sec 1.037, Tokens/sec 561.228, Trained Tokens 8110.0, Peak mem 2.334 GB
Iter 16: Train loss 0.005, Chosen reward -3.917, Rejected reward -9.296, Learning Rate 1.000e-05, It/sec 1.021, Tokens/sec 585.813, Trained Tokens 8684.0, Peak mem 2.334 GB
Iter 17: Train loss 0.005, Chosen reward -5.966, Rejected reward -11.340, Learning Rate 1.000e-05, It/sec 1.130, Tokens/sec 600.253, Trained Tokens 9215.0, Peak mem 2.334 GB
Iter 18: Train loss 0.003, Chosen reward -5.729, Rejected reward -11.517, Learning Rate 1.000e-05, It/sec 1.022, Tokens/sec 553.065, Trained Tokens 9756.0, Peak mem 2.334 GB
Iter 19: Train loss 0.003, Chosen reward -4.463, Rejected reward -10.221, Learning Rate 1.000e-05, It/sec 1.020, Tokens/sec 585.313, Trained Tokens 10330.0, Peak mem 2.334 GB
Iter 20: Train loss 0.001, Chosen reward -6.572, Rejected reward -13.383, Learning Rate 1.000e-05, It/sec 1.126, Tokens/sec 590.914, Trained Tokens 10855.0, Peak mem 2.334 GB
Iter 21: Train loss 0.001, Chosen reward -6.767, Rejected reward -13.741, Learning Rate 1.000e-05, It/sec 1.117, Tokens/sec 586.599, Trained Tokens 11380.0, Peak mem 2.334 GB
Iter 22: Train loss 0.002, Chosen reward -4.739, Rejected reward -10.764, Learning Rate 1.000e-05, It/sec 1.022, Tokens/sec 586.627, Trained Tokens 11954.0, Peak mem 2.334 GB
Iter 23: Train loss 0.002, Chosen reward -6.555, Rejected reward -12.969, Learning Rate 1.000e-05, It/sec 1.039, Tokens/sec 562.062, Trained Tokens 12495.0, Peak mem 2.334 GB
Iter 24: Train loss 0.001, Chosen reward -6.910, Rejected reward -13.659, Learning Rate 1.000e-05, It/sec 1.132, Tokens/sec 600.834, Trained Tokens 13026.0, Peak mem 2.334 GB
Iter 25: Train loss 0.001, Chosen reward -7.058, Rejected reward -13.945, Learning Rate 1.000e-05, It/sec 1.128, Tokens/sec 599.030, Trained Tokens 13557.0, Peak mem 2.334 GB
Iter 26: Train loss 0.001, Chosen reward -7.374, Rejected reward -14.935, Learning Rate 1.000e-05, It/sec 1.124, Tokens/sec 589.858, Trained Tokens 14082.0, Peak mem 2.334 GB
Iter 27: Train loss 0.001, Chosen reward -6.922, Rejected reward -13.668, Learning Rate 1.000e-05, It/sec 1.007, Tokens/sec 545.049, Trained Tokens 14623.0, Peak mem 2.334 GB
Iter 28: Train loss 0.002, Chosen reward -5.298, Rejected reward -11.647, Learning Rate 1.000e-05, It/sec 0.999, Tokens/sec 573.311, Trained Tokens 15197.0, Peak mem 2.334 GB
Iter 29: Train loss 0.000, Chosen reward -7.491, Rejected reward -15.261, Learning Rate 1.000e-05, It/sec 1.102, Tokens/sec 578.727, Trained Tokens 15722.0, Peak mem 2.334 GB
Iter 30: Train loss 0.001, Chosen reward -6.822, Rejected reward -13.766, Learning Rate 1.000e-05, It/sec 1.029, Tokens/sec 556.426, Trained Tokens 16263.0, Peak mem 2.334 GB
Iter 31: Train loss 0.002, Chosen reward -5.426, Rejected reward -11.906, Learning Rate 1.000e-05, It/sec 1.015, Tokens/sec 582.556, Trained Tokens 16837.0, Peak mem 2.334 GB
Iter 32: Train loss 0.001, Chosen reward -7.318, Rejected reward -14.881, Learning Rate 1.000e-05, It/sec 1.120, Tokens/sec 594.497, Trained Tokens 17368.0, Peak mem 2.334 GB
Iter 33: Train loss 0.001, Chosen reward -7.054, Rejected reward -14.140, Learning Rate 1.000e-05, It/sec 1.040, Tokens/sec 562.842, Trained Tokens 17909.0, Peak mem 2.334 GB
Iter 34: Train loss 0.001, Chosen reward -5.511, Rejected reward -12.067, Learning Rate 1.000e-05, It/sec 1.006, Tokens/sec 577.408, Trained Tokens 18483.0, Peak mem 2.334 GB
Iter 35: Train loss 0.000, Chosen reward -7.680, Rejected reward -15.788, Learning Rate 1.000e-05, It/sec 1.105, Tokens/sec 580.117, Trained Tokens 19008.0, Peak mem 2.334 GB
Iter 36: Train loss 0.000, Chosen reward -7.479, Rejected reward -15.274, Learning Rate 1.000e-05, It/sec 1.126, Tokens/sec 597.779, Trained Tokens 19539.0, Peak mem 2.334 GB
Iter 37: Train loss 0.001, Chosen reward -7.159, Rejected reward -14.403, Learning Rate 1.000e-05, It/sec 1.042, Tokens/sec 563.799, Trained Tokens 20080.0, Peak mem 2.334 GB
Iter 38: Train loss 0.000, Chosen reward -7.623, Rejected reward -15.486, Learning Rate 1.000e-05, It/sec 1.135, Tokens/sec 602.490, Trained Tokens 20611.0, Peak mem 2.334 GB
Iter 39: Train loss 0.001, Chosen reward -5.714, Rejected reward -12.384, Learning Rate 1.000e-05, It/sec 1.011, Tokens/sec 580.155, Trained Tokens 21185.0, Peak mem 2.334 GB
Iter 40: Train loss 0.000, Chosen reward -7.896, Rejected reward -16.162, Learning Rate 1.000e-05, It/sec 1.132, Tokens/sec 594.551, Trained Tokens 21710.0, Peak mem 2.334 GB
Iter 41: Train loss 0.000, Chosen reward -7.980, Rejected reward -16.246, Learning Rate 1.000e-05, It/sec 1.135, Tokens/sec 596.033, Trained Tokens 22235.0, Peak mem 2.334 GB
Iter 42: Train loss 0.001, Chosen reward -5.777, Rejected reward -12.502, Learning Rate 1.000e-05, It/sec 1.016, Tokens/sec 582.927, Trained Tokens 22809.0, Peak mem 2.334 GB
Iter 43: Train loss 0.001, Chosen reward -7.428, Rejected reward -14.793, Learning Rate 1.000e-05, It/sec 1.053, Tokens/sec 569.502, Trained Tokens 23350.0, Peak mem 2.334 GB
Iter 44: Train loss 0.000, Chosen reward -7.785, Rejected reward -15.877, Learning Rate 1.000e-05, It/sec 1.134, Tokens/sec 602.412, Trained Tokens 23881.0, Peak mem 2.334 GB
Iter 45: Train loss 0.000, Chosen reward -7.810, Rejected reward -15.927, Learning Rate 1.000e-05, It/sec 1.131, Tokens/sec 600.577, Trained Tokens 24412.0, Peak mem 2.334 GB
Iter 46: Train loss 0.001, Chosen reward -7.656, Rejected reward -15.083, Learning Rate 1.000e-05, It/sec 1.046, Tokens/sec 566.153, Trained Tokens 24953.0, Peak mem 2.334 GB
Iter 47: Train loss 0.000, Chosen reward -8.164, Rejected reward -16.568, Learning Rate 1.000e-05, It/sec 1.122, Tokens/sec 588.969, Trained Tokens 25478.0, Peak mem 2.334 GB
Iter 48: Train loss 0.001, Chosen reward -5.861, Rejected reward -12.659, Learning Rate 1.000e-05, It/sec 1.009, Tokens/sec 579.426, Trained Tokens 26052.0, Peak mem 2.334 GB
Iter 49: Train loss 0.001, Chosen reward -7.548, Rejected reward -15.025, Learning Rate 1.000e-05, It/sec 1.014, Tokens/sec 548.629, Trained Tokens 26593.0, Peak mem 2.334 GB
Iter 50: Val loss 0.024, Val chosen reward -9.163, Val rejected reward -12.868, Val took 0.450s
Iter 50: Train loss 0.000, Chosen reward -7.985, Rejected reward -16.186, Learning Rate 1.000e-05, It/sec 1.111, Tokens/sec 589.999, Trained Tokens 27124.0, Peak mem 2.334 GB
Iter 51: Train loss 0.000, Chosen reward -8.226, Rejected reward -16.677, Learning Rate 1.000e-05, It/sec 1.117, Tokens/sec 586.201, Trained Tokens 27649.0, Peak mem 2.334 GB
Iter 52: Train loss 0.001, Chosen reward -5.932, Rejected reward -12.789, Learning Rate 1.000e-05, It/sec 0.975, Tokens/sec 559.839, Trained Tokens 28223.0, Peak mem 2.334 GB
Iter 53: Train loss 0.000, Chosen reward -8.259, Rejected reward -16.733, Learning Rate 1.000e-05, It/sec 1.115, Tokens/sec 585.311, Trained Tokens 28748.0, Peak mem 2.334 GB
Iter 54: Train loss 0.001, Chosen reward -7.767, Rejected reward -15.317, Learning Rate 1.000e-05, It/sec 1.031, Tokens/sec 557.960, Trained Tokens 29289.0, Peak mem 2.334 GB
Iter 55: Train loss 0.000, Chosen reward -7.992, Rejected reward -16.276, Learning Rate 1.000e-05, It/sec 1.114, Tokens/sec 591.380, Trained Tokens 29820.0, Peak mem 2.334 GB
Iter 56: Train loss 0.001, Chosen reward -5.916, Rejected reward -12.794, Learning Rate 1.000e-05, It/sec 1.002, Tokens/sec 574.906, Trained Tokens 30394.0, Peak mem 2.334 GB
Iter 57: Train loss 0.000, Chosen reward -8.352, Rejected reward -16.850, Learning Rate 1.000e-05, It/sec 1.105, Tokens/sec 579.930, Trained Tokens 30919.0, Peak mem 2.334 GB
Iter 58: Train loss 0.000, Chosen reward -8.088, Rejected reward -16.403, Learning Rate 1.000e-05, It/sec 1.125, Tokens/sec 597.197, Trained Tokens 31450.0, Peak mem 2.334 GB
Iter 59: Train loss 0.001, Chosen reward -7.888, Rejected reward -15.452, Learning Rate 1.000e-05, It/sec 1.026, Tokens/sec 555.014, Trained Tokens 31991.0, Peak mem 2.334 GB
Iter 60: Train loss 0.001, Chosen reward -6.067, Rejected reward -12.971, Learning Rate 1.000e-05, It/sec 1.018, Tokens/sec 584.545, Trained Tokens 32565.0, Peak mem 2.334 GB
Iter 61: Train loss 0.001, Chosen reward -5.942, Rejected reward -12.871, Learning Rate 1.000e-05, It/sec 1.025, Tokens/sec 588.439, Trained Tokens 33139.0, Peak mem 2.334 GB
Iter 62: Train loss 0.001, Chosen reward -7.672, Rejected reward -15.263, Learning Rate 1.000e-05, It/sec 1.019, Tokens/sec 551.120, Trained Tokens 33680.0, Peak mem 2.334 GB
Iter 63: Train loss 0.000, Chosen reward -8.056, Rejected reward -16.406, Learning Rate 1.000e-05, It/sec 1.133, Tokens/sec 601.667, Trained Tokens 34211.0, Peak mem 2.334 GB
Iter 64: Train loss 0.000, Chosen reward -8.242, Rejected reward -16.768, Learning Rate 1.000e-05, It/sec 1.130, Tokens/sec 593.061, Trained Tokens 34736.0, Peak mem 2.334 GB
Iter 65: Train loss 0.000, Chosen reward -7.887, Rejected reward -15.516, Learning Rate 1.000e-05, It/sec 1.033, Tokens/sec 558.785, Trained Tokens 35277.0, Peak mem 2.334 GB
Iter 66: Train loss 0.001, Chosen reward -6.110, Rejected reward -13.061, Learning Rate 1.000e-05, It/sec 1.018, Tokens/sec 584.581, Trained Tokens 35851.0, Peak mem 2.334 GB
Iter 67: Train loss 0.000, Chosen reward -8.406, Rejected reward -16.965, Learning Rate 1.000e-05, It/sec 1.112, Tokens/sec 583.652, Trained Tokens 36376.0, Peak mem 2.334 GB
Iter 68: Train loss 0.000, Chosen reward -8.185, Rejected reward -16.584, Learning Rate 1.000e-05, It/sec 1.115, Tokens/sec 592.150, Trained Tokens 36907.0, Peak mem 2.334 GB
Iter 69: Train loss 0.001, Chosen reward -6.123, Rejected reward -13.101, Learning Rate 1.000e-05, It/sec 1.019, Tokens/sec 584.980, Trained Tokens 37481.0, Peak mem 2.334 GB
Iter 70: Train loss 0.000, Chosen reward -8.077, Rejected reward -16.485, Learning Rate 1.000e-05, It/sec 1.131, Tokens/sec 600.679, Trained Tokens 38012.0, Peak mem 2.334 GB
Iter 71: Train loss 0.000, Chosen reward -8.431, Rejected reward -16.999, Learning Rate 1.000e-05, It/sec 1.124, Tokens/sec 589.899, Trained Tokens 38537.0, Peak mem 2.334 GB
Iter 72: Train loss 0.000, Chosen reward -7.784, Rejected reward -15.443, Learning Rate 1.000e-05, It/sec 1.040, Tokens/sec 562.830, Trained Tokens 39078.0, Peak mem 2.334 GB
Iter 73: Train loss 0.000, Chosen reward -8.261, Rejected reward -16.844, Learning Rate 1.000e-05, It/sec 1.090, Tokens/sec 572.326, Trained Tokens 39603.0, Peak mem 2.334 GB
Iter 74: Train loss 0.000, Chosen reward -8.130, Rejected reward -16.546, Learning Rate 1.000e-05, It/sec 1.102, Tokens/sec 585.374, Trained Tokens 40134.0, Peak mem 2.334 GB
Iter 75: Train loss 0.000, Chosen reward -8.013, Rejected reward -15.691, Learning Rate 1.000e-05, It/sec 1.032, Tokens/sec 558.245, Trained Tokens 40675.0, Peak mem 2.334 GB
Iter 76: Train loss 0.001, Chosen reward -6.139, Rejected reward -13.167, Learning Rate 1.000e-05, It/sec 1.023, Tokens/sec 587.235, Trained Tokens 41249.0, Peak mem 2.334 GB
Iter 77: Train loss 0.000, Chosen reward -8.220, Rejected reward -16.655, Learning Rate 1.000e-05, It/sec 1.139, Tokens/sec 604.765, Trained Tokens 41780.0, Peak mem 2.334 GB
Iter 78: Train loss 0.001, Chosen reward -6.017, Rejected reward -13.055, Learning Rate 1.000e-05, It/sec 1.025, Tokens/sec 588.620, Trained Tokens 42354.0, Peak mem 2.334 GB
Iter 79: Train loss 0.000, Chosen reward -8.383, Rejected reward -16.996, Learning Rate 1.000e-05, It/sec 1.132, Tokens/sec 594.097, Trained Tokens 42879.0, Peak mem 2.334 GB
Iter 80: Train loss 0.000, Chosen reward -7.955, Rejected reward -15.647, Learning Rate 1.000e-05, It/sec 1.042, Tokens/sec 563.934, Trained Tokens 43420.0, Peak mem 2.334 GB
Iter 81: Train loss 0.000, Chosen reward -8.096, Rejected reward -16.572, Learning Rate 1.000e-05, It/sec 1.114, Tokens/sec 591.462, Trained Tokens 43951.0, Peak mem 2.334 GB
Iter 82: Train loss 0.000, Chosen reward -7.869, Rejected reward -15.571, Learning Rate 1.000e-05, It/sec 1.016, Tokens/sec 549.539, Trained Tokens 44492.0, Peak mem 2.334 GB
Iter 83: Train loss 0.001, Chosen reward -6.155, Rejected reward -13.212, Learning Rate 1.000e-05, It/sec 1.017, Tokens/sec 583.529, Trained Tokens 45066.0, Peak mem 2.334 GB
Iter 84: Train loss 0.000, Chosen reward -8.548, Rejected reward -17.152, Learning Rate 1.000e-05, It/sec 1.106, Tokens/sec 580.696, Trained Tokens 45591.0, Peak mem 2.334 GB
Iter 85: Train loss 0.000, Chosen reward -7.857, Rejected reward -15.594, Learning Rate 1.000e-05, It/sec 1.018, Tokens/sec 550.657, Trained Tokens 46132.0, Peak mem 2.334 GB
Iter 86: Train loss 0.000, Chosen reward -8.648, Rejected reward -17.273, Learning Rate 1.000e-05, It/sec 1.113, Tokens/sec 584.434, Trained Tokens 46657.0, Peak mem 2.334 GB
Iter 87: Train loss 0.001, Chosen reward -6.176, Rejected reward -13.275, Learning Rate 1.000e-05, It/sec 1.018, Tokens/sec 584.217, Trained Tokens 47231.0, Peak mem 2.334 GB
Iter 88: Train loss 0.000, Chosen reward -8.148, Rejected reward -16.665, Learning Rate 1.000e-05, It/sec 1.126, Tokens/sec 597.860, Trained Tokens 47762.0, Peak mem 2.334 GB
Iter 89: Train loss 0.000, Chosen reward -7.948, Rejected reward -15.684, Learning Rate 1.000e-05, It/sec 1.020, Tokens/sec 551.883, Trained Tokens 48303.0, Peak mem 2.334 GB
Iter 90: Train loss 0.000, Chosen reward -8.251, Rejected reward -16.776, Learning Rate 1.000e-05, It/sec 1.125, Tokens/sec 597.383, Trained Tokens 48834.0, Peak mem 2.334 GB
Iter 91: Train loss 0.001, Chosen reward -6.303, Rejected reward -13.426, Learning Rate 1.000e-05, It/sec 1.021, Tokens/sec 585.794, Trained Tokens 49408.0, Peak mem 2.334 GB
Iter 92: Train loss 0.000, Chosen reward -8.543, Rejected reward -17.184, Learning Rate 1.000e-05, It/sec 1.132, Tokens/sec 594.203, Trained Tokens 49933.0, Peak mem 2.334 GB
Iter 93: Train loss 0.000, Chosen reward -7.847, Rejected reward -15.604, Learning Rate 1.000e-05, It/sec 1.012, Tokens/sec 547.715, Trained Tokens 50474.0, Peak mem 2.334 GB
Iter 94: Train loss 0.000, Chosen reward -8.305, Rejected reward -16.845, Learning Rate 1.000e-05, It/sec 1.082, Tokens/sec 574.324, Trained Tokens 51005.0, Peak mem 2.334 GB
Iter 95: Train loss 0.000, Chosen reward -8.568, Rejected reward -17.224, Learning Rate 1.000e-05, It/sec 1.097, Tokens/sec 576.063, Trained Tokens 51530.0, Peak mem 2.334 GB
Iter 96: Train loss 0.001, Chosen reward -6.237, Rejected reward -13.395, Learning Rate 1.000e-05, It/sec 0.974, Tokens/sec 558.978, Trained Tokens 52104.0, Peak mem 2.334 GB
Iter 97: Train loss 0.000, Chosen reward -8.634, Rejected reward -17.300, Learning Rate 1.000e-05, It/sec 1.078, Tokens/sec 565.927, Trained Tokens 52629.0, Peak mem 2.334 GB
Iter 98: Train loss 0.000, Chosen reward -8.200, Rejected reward -16.764, Learning Rate 1.000e-05, It/sec 1.120, Tokens/sec 594.872, Trained Tokens 53160.0, Peak mem 2.334 GB
Iter 99: Train loss 0.001, Chosen reward -6.265, Rejected reward -13.419, Learning Rate 1.000e-05, It/sec 1.011, Tokens/sec 580.569, Trained Tokens 53734.0, Peak mem 2.334 GB
Iter 100: Val loss 0.019, Val chosen reward -9.489, Val rejected reward -13.427, Val took 0.438s
Iter 100: Train loss 0.000, Chosen reward -7.941, Rejected reward -15.751, Learning Rate 1.000e-05, It/sec 1.034, Tokens/sec 559.430, Trained Tokens 54275.0, Peak mem 2.334 GB
Iter 100: Saved adapter weights to /Users/gokdenizgulmez/Desktop/test-dpo/adapters.safetensors and /Users/gokdenizgulmez/Desktop/test-dpo/0000100_adapters.safetensors.
Saved final weights to /Users/gokdenizgulmez/Desktop/test-dpo/adapters.safetensors.
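
In the standard DPO formulation, the chosen/rejected rewards reported above are the beta-scaled log-probability ratios of the policy against the frozen reference model, and the sigmoid loss is -log sigmoid of their difference; that is why Iter 1 reports a loss of 0.693 (= ln 2) with both rewards at zero. A minimal MLX-style sketch of that formulation, purely illustrative rather than the PR's actual code (function and variable names are assumed):

import mlx.core as mx

def dpo_sigmoid_loss(policy_chosen_logps, policy_rejected_logps,
                     ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Per-example rewards: beta-scaled log-ratios of policy vs. reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards
    # -log sigmoid(m), written as log(1 + exp(-m)) via logaddexp for stability.
    losses = mx.logaddexp(0.0, -margins)
    return mx.mean(losses), mx.mean(chosen_rewards), mx.mean(rejected_rewards)

Here beta corresponds to the --beta flag (0.1 above); larger values penalize drifting away from the reference model more strongly.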

@madroidmaq
Contributor

madroidmaq commented Jan 22, 2025

Wow! Thank you so much for your PR! I've been wanting to try this feature for a long time! If possible, could you upload the dataset to Hugging Face, just like mlx-community/wikisql? That way others can get the process running quickly and won't need to hunt for a dataset early on.

Adjust the command as follows:

python -m mlx_lm.lora \
    --model mlx-community/Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-4bit \
    --train \
-   --data /Users/gokdenizgulmez/Desktop/dpo_test_data \
+   --data mlx-community/dpo-dataset \
    --iters 100 \
    --batch-size 1 \
    --num-layers 1 \
    --val-batches 1 \
    --steps-per-report 1 \
    --adapter-path /Users/gokdenizgulmez/Desktop/test-dpo \
    --max-seq-length 1024 \
    --grad-checkpoint \
    --training-mode dpo \
    --fine-tune-type lora \
    --dpo-loss-type sigmoid \
    --beta 0.1 \
    --steps-per-eval 50

@Goekdeniz-Guelmez
Contributor Author

@madroidmaq yeah, I’ll do that when I’m home 👍

@Goekdeniz-Guelmez
Contributor Author

Goekdeniz-Guelmez commented Jan 22, 2025

You can get the dataset from mlx-community/DPO-test, or run:

python -m mlx_lm.lora \
    --model mlx-community/Josiefied-Qwen2.5-0.5B-Instruct-abliterated-v1-4bit \
    --train \
    --data mlx-community/DPO-test \
    --iters 100 \
    --batch-size 1 \
    --num-layers 1 \
    --val-batches 1 \
    --steps-per-report 1 \
    --adapter-path path/to/adapters \
    --max-seq-length 1024 \
    --grad-checkpoint \
    --training-mode dpo \
    --fine-tune-type lora \
    --dpo-loss-type sigmoid \
    --beta 0.1 \
    --steps-per-eval 50

@ivanfioravanti
Contributor

This is amazing! Thanks @Goekdeniz-Guelmez TOP TOP TOP!

@Goekdeniz-Guelmez
Contributor Author

Thanks so much!!! @ivanfioravanti

@chimezie
Contributor

Indeed. Very long-awaited, considering (TBH) the current architectural brittleness of "the state of the art" in HF-based preference optimization. I've been holding off doing any PO training with those tools until it can be done natively in mlx, and I'm glad we now have PRs for this. Thank you for your service, sir.

@Goekdeniz-Guelmez
Contributor Author

@chimezie thanks for the response! I completely agree, DPO training was long overdue :)

@Goekdeniz-Guelmez
Contributor Author

Model: Qwen/Qwen2.5-3B-Instruct, Dataset: mlx-community/DPO-test.

Prompt: what's up

Before: I'm just a program, so I don't have real-time awareness of my surroundings. How can I assist you today?

After: Not much, how about you? What's up?

Training args:

python -m mlx_lm.lora \
    --model Qwen/Qwen2.5-3B-Instruct \
    --train \
    --test \
    --num-layers 8 \
    --data /Users/gokdenizgulmez/Desktop/dpo_test_data \
    --iters 20 \
    --batch-size 2 \
    --val-batches 1 \
    --steps-per-report 1 \
    --adapter-path /Users/gokdenizgulmez/Desktop/dpo-full \
    --max-seq-length 1024 \
    --grad-checkpoint \
    --training-mode dpo \
    --fine-tune-type lora \
    --dpo-loss-type sigmoid \
    --beta 0.1 \
    --steps-per-eval 500 \
    --test-batches 1

Output:

Starting DPO training..., iters: 20
Iter 1: Val loss 0.69314718, Val chosen reward 0.000, Val rejected reward 0.000, Val took 1.986s
Iter 1: Train loss 0.69314718, Chosen reward 0.000, Rejected reward 0.000, Learning Rate 1.000e-05, It/sec 0.139, Tokens/sec 94.418, Trained Tokens 679.0, Peak mem 14.629 GB
Iter 2: Train loss 0.52381301, Chosen reward 0.954, Rejected reward 0.580, Learning Rate 1.000e-05, It/sec 0.109, Tokens/sec 96.351, Trained Tokens 1564.0, Peak mem 15.923 GB
Iter 3: Train loss 0.31744388, Chosen reward 1.096, Rejected reward 0.036, Learning Rate 1.000e-05, It/sec 0.100, Tokens/sec 100.993, Trained Tokens 2576.0, Peak mem 16.125 GB
Iter 4: Train loss 0.11990404, Chosen reward 1.687, Rejected reward -0.395, Learning Rate 1.000e-05, It/sec 0.138, Tokens/sec 98.028, Trained Tokens 3285.0, Peak mem 16.125 GB
Iter 5: Train loss 0.06170468, Chosen reward 1.881, Rejected reward -0.931, Learning Rate 1.000e-05, It/sec 0.136, Tokens/sec 111.598, Trained Tokens 4105.0, Peak mem 16.125 GB
Iter 6: Train loss 0.66945851, Chosen reward -0.398, Rejected reward -0.639, Learning Rate 1.000e-05, It/sec 0.185, Tokens/sec 104.633, Trained Tokens 4671.0, Peak mem 16.125 GB
Iter 7: Train loss 0.05284286, Chosen reward 1.651, Rejected reward -1.283, Learning Rate 1.000e-05, It/sec 0.158, Tokens/sec 104.765, Trained Tokens 5334.0, Peak mem 16.125 GB
Iter 8: Train loss 0.05613055, Chosen reward 1.919, Rejected reward -1.047, Learning Rate 1.000e-05, It/sec 0.123, Tokens/sec 125.102, Trained Tokens 6352.0, Peak mem 16.125 GB
Iter 9: Train loss 0.00954697, Chosen reward 2.704, Rejected reward -2.188, Learning Rate 1.000e-05, It/sec 0.159, Tokens/sec 110.622, Trained Tokens 7047.0, Peak mem 16.125 GB
Iter 10: Train loss 0.00161777, Chosen reward 3.228, Rejected reward -3.210, Learning Rate 1.000e-05, It/sec 0.138, Tokens/sec 109.469, Trained Tokens 7838.0, Peak mem 16.125 GB
Iter 11: Train loss 0.05984394, Chosen reward 1.428, Rejected reward -1.491, Learning Rate 1.000e-05, It/sec 0.135, Tokens/sec 90.904, Trained Tokens 8511.0, Peak mem 16.125 GB
Iter 12: Train loss 0.00029336, Chosen reward 3.605, Rejected reward -4.926, Learning Rate 1.000e-05, It/sec 0.180, Tokens/sec 119.693, Trained Tokens 9176.0, Peak mem 16.125 GB
Iter 13: Train loss 0.00643726, Chosen reward 0.970, Rejected reward -4.860, Learning Rate 1.000e-05, It/sec 0.136, Tokens/sec 108.719, Trained Tokens 9977.0, Peak mem 16.125 GB
Iter 14: Train loss 0.00050419, Chosen reward 5.039, Rejected reward -3.148, Learning Rate 1.000e-05, It/sec 0.097, Tokens/sec 108.070, Trained Tokens 11092.0, Peak mem 16.205 GB
Iter 15: Train loss 0.00188465, Chosen reward 1.447, Rejected reward -5.128, Learning Rate 1.000e-05, It/sec 0.169, Tokens/sec 96.303, Trained Tokens 11662.0, Peak mem 16.205 GB
Iter 16: Train loss 0.00063427, Chosen reward 2.362, Rejected reward -5.828, Learning Rate 1.000e-05, It/sec 0.218, Tokens/sec 118.330, Trained Tokens 12206.0, Peak mem 16.205 GB
Iter 17: Train loss 0.00474209, Chosen reward 3.257, Rejected reward -3.558, Learning Rate 1.000e-05, It/sec 0.098, Tokens/sec 108.870, Trained Tokens 13312.0, Peak mem 16.205 GB
Iter 18: Train loss 0.00004771, Chosen reward 3.737, Rejected reward -7.121, Learning Rate 1.000e-05, It/sec 0.157, Tokens/sec 101.783, Trained Tokens 13959.0, Peak mem 16.205 GB
Iter 19: Train loss 0.00000356, Chosen reward 4.071, Rejected reward -8.476, Learning Rate 1.000e-05, It/sec 0.138, Tokens/sec 100.560, Trained Tokens 14687.0, Peak mem 16.205 GB
Iter 20: Val loss 0.00023831, Val chosen reward 1.139, Val rejected reward -7.550, Val took 2.021s
Iter 20: Train loss 0.00001388, Chosen reward 4.409, Rejected reward -6.799, Learning Rate 1.000e-05, It/sec 0.138, Tokens/sec 101.364, Trained Tokens 15422.0, Peak mem 16.205 GB
Saved final weights to /Users/gokdenizgulmez/Desktop/dpo-full/adapters.safetensors.
Testing
Test loss 0.69314718, Rewards: 0.00000000, 0.00000000

@chimezie
Contributor

@Goekdeniz-Guelmez For this PR and #1210, it would be useful to also report the reward accuracies and margins, since those are the primary measures for preference optimization.

See, for example, how they are calculated in HF's trl DPO trainer.
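
Following trl's definitions, reward accuracy is the fraction of pairs where the chosen reward exceeds the rejected one, and the reward margin is the mean gap between them. A minimal sketch of how these metrics could be computed from per-example rewards (illustrative only, not this PR's code):

import mlx.core as mx

def reward_metrics(chosen_rewards, rejected_rewards):
    # Accuracy: share of examples where the chosen response out-scores the rejected one.
    accuracy = mx.mean((chosen_rewards > rejected_rewards).astype(mx.float32))
    # Margin: average gap between chosen and rejected per-example rewards.
    margin = mx.mean(chosen_rewards - rejected_rewards)
    return accuracy, margin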

@Goekdeniz-Guelmez
Contributor Author

Goekdeniz-Guelmez commented Jan 26, 2025

@chimezie

Iter 1: Val loss 0.69314718, Val chosen reward 0.000, Val rejected reward 0.000, Val accuracy 0.000, Val margin 0.000, Val took 1.941s
Iter 1: Train loss 0.69314718, Chosen reward 0.000, Rejected reward 0.000, Accuracy 0.000, Margin 0.000, Learning Rate 1.000e-05, It/sec 0.141, Tokens/sec 95.431, Trained Tokens 679.0, Peak mem 14.636 GB
Iter 2: Train loss 0.49983707, Chosen reward 0.304, Rejected reward -0.173, Accuracy 1.000, Margin 0.476, Learning Rate 1.000e-05, It/sec 0.109, Tokens/sec 96.268, Trained Tokens 1564.0, Peak mem 15.923 GB
Iter 3: Train loss 0.29690582, Chosen reward 1.239, Rejected reward 0.130, Accuracy 2.000, Margin 1.585, Learning Rate 1.000e-05, It/sec 0.101, Tokens/sec 101.869, Trained Tokens 2576.0, Peak mem 16.125 GB
Iter 4: Train loss 0.11287212, Chosen reward 1.498, Rejected reward -0.629, Accuracy 3.000, Margin 3.712, Learning Rate 1.000e-05, It/sec 0.136, Tokens/sec 96.193, Trained Tokens 3285.0, Peak mem 16.125 GB

@ivanfioravanti
Contributor

You rock @Goekdeniz-Guelmez 🔥
