Reduce time cost of grad norm clipping #60

@danbraunai-apollo

Description
Grad norm clipping slows down runs by 20-25% for local runs and by 5-15% for e2e runs. We have not thoroughly tested its value in these experiments.

We can either:

  1. Remove it completely. This should be tested by running a small sweep and comparing runs to those reported in the paper (i.e. any run in https://wandb.ai/sparsify/gpt2/).
  2. Only use it at the start of training when grad norms are high.
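Option 2 could be sketched as a step-gated clip. The snippet below is a minimal dependency-free illustration of global-norm clipping applied only before a cutoff step; the `clip_until_step` threshold and `max_norm` value are hypothetical placeholders, not values from this repo (a real implementation would wrap `torch.nn.utils.clip_grad_norm_`).

```python
import math


def clip_grad_norm(grads, max_norm):
    """Scale grads in place so their global L2 norm is at most max_norm.

    Returns the pre-clip norm, mirroring the usual clip-by-global-norm API.
    """
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads[:] = [g * scale for g in grads]
    return total_norm


def maybe_clip(grads, step, max_norm=1.0, clip_until_step=1000):
    # Option 2: only clip during the early phase of training, when grad
    # norms tend to spike; skip the (costly) clip afterwards.
    if step < clip_until_step:
        return clip_grad_norm(grads, max_norm)
    return math.sqrt(sum(g * g for g in grads))
```

For example, gradients `[3.0, 4.0]` (norm 5.0) at step 0 get rescaled to norm 1.0, while the same gradients at a late step pass through untouched.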