Hi there! I have been using this configuration:
{
  "zero_allow_untested_optimizer": true,
  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "zero_optimization": {
    "stage": 2,
    "allgather_partitions": true,
    "allgather_bucket_size": 2e6,
    "reduce_scatter": true,
    "reduce_bucket_size": 2e6,
    "overlap_comm": false,
    "contiguous_gradients": true,
    "cpu_offload": true
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 5e-5,
      "betas": [0.9, 0.999],
      "eps": 1e-6,
      "weight_decay": 0.01
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": 0,
      "warmup_max_lr": 5e-5,
      "warmup_num_steps": 10000
    }
  }
}
To train a modified XLNet model (using the transformers library) on four 1080 Tis.
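For reference, the config is wired into training roughly like this (a minimal sketch, not my exact script; the model name and config path are placeholders, and recent DeepSpeed versions accept the config dict or path via the config argument):

# Rough sketch of the setup (placeholder names, not my actual training script).
import deepspeed
from transformers import XLNetLMHeadModel

model = XLNetLMHeadModel.from_pretrained("xlnet-base-cased")

# DeepSpeed builds the AdamW optimizer and WarmupLR scheduler from the JSON above.
model_engine, optimizer, _, lr_scheduler = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",  # the configuration shown above, saved to disk
)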
However, after ~20 iterations, once the loss scale has settled and training is underway, it crashes in this function:
def complete_grad_norm_calculation_for_cpu_offload(self, params):
    total_norm = 0.0
    norm_type = 2.0
    for p in params:
        if is_model_parallel_parameter(p) or (self.model_parallel_rank == 0):
            param_id = self.get_param_id(p)
            param_norm = self.norm_for_param_grads[param_id]
            total_norm += param_norm.item()**2
The crash is a KeyError on the self.norm_for_param_grads[param_id] lookup.
I sidestepped this with a
try:
    param_norm = self.norm_for_param_grads[param_id]
    total_norm += param_norm.item()**2
except:
    pass
and it continues to train. Would anyone know what is happening?
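In case it is useful, a narrower workaround than the bare except (just a sketch, since I still don't know why the entries go missing) is to skip only the parameters that have no recorded norm:

def complete_grad_norm_calculation_for_cpu_offload(self, params):
    total_norm = 0.0
    norm_type = 2.0
    for p in params:
        if is_model_parallel_parameter(p) or (self.model_parallel_rank == 0):
            param_id = self.get_param_id(p)
            # Skip parameters whose gradient norm was never recorded this step,
            # instead of swallowing every exception.
            if param_id not in self.norm_for_param_grads:
                continue
            param_norm = self.norm_for_param_grads[param_id]
            total_norm += param_norm.item()**2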