Benchmark recommendations.
PiperOrigin-RevId: 607599420
vroulet authored and OptaxDev committed Feb 16, 2024
1 parent 967b2e6 commit ceb69a4
Showing 1 changed file with 19 additions and 0 deletions.
README.md
@@ -110,6 +110,25 @@ cd docs/
make html
```

## Benchmarks
If you feel lost in the crowd of optimizers available for deep learning, the
following extensive benchmarks may help:

[Benchmarking Neural Network Training Algorithms, Dahl G. et al, 2023](https://arxiv.org/pdf/2306.07179),

[Descending through a Crowded Valley — Benchmarking Deep Learning Optimizers, Schmidt R. et al, 2021](https://proceedings.mlr.press/v139/schmidt21a).

If you are interested in developing your own benchmark for some tasks,
consider the following framework:

[Benchopt: Reproducible, efficient and collaborative optimization benchmarks, Moreau T. et al, 2022](https://arxiv.org/abs/2206.13424).

Finally, if you are looking for recommendations on tuning optimizers,
consider taking a look at:

[Deep Learning Tuning Playbook, Godbole V. et al, 2023](https://github.com/google-research/tuning_playbook).
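
As a quick, self-contained illustration (not a substitute for the benchmarks
above), a small script comparing two Optax optimizers could look like the
sketch below; the quadratic objective, step count, and learning rates are
placeholders chosen only for demonstration.

```python
import jax
import jax.numpy as jnp
import optax


def loss_fn(params):
  # Toy quadratic objective with its minimum at 1.0 (illustrative only).
  return jnp.sum((params - 1.0) ** 2)


def run(optimizer, num_steps=100):
  params = jnp.zeros(3)
  opt_state = optimizer.init(params)
  for _ in range(num_steps):
    grads = jax.grad(loss_fn)(params)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
  return loss_fn(params)


# Placeholder learning rates; a real comparison would tune them per optimizer.
for name, opt in [('sgd', optax.sgd(1e-1)), ('adam', optax.adam(1e-1))]:
  print(name, run(opt))
```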


## Citing Optax

This repository is part of the DeepMind JAX Ecosystem, to cite Optax