From c065025c4720a87783ca504a9018454893d00649 Mon Sep 17 00:00:00 2001
From: Stas Bekman
Date: Tue, 4 May 2021 14:17:11 -0700
Subject: [PATCH] [trainer] document resume randomness (#11588)

* document resume randomness

* fix link

* reword

* fix

* reword

* style
---
 docs/source/main_classes/trainer.rst | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/docs/source/main_classes/trainer.rst b/docs/source/main_classes/trainer.rst
index b0401750f159cf..9fc88a658a337f 100644
--- a/docs/source/main_classes/trainer.rst
+++ b/docs/source/main_classes/trainer.rst
@@ -119,6 +119,20 @@ TFTrainingArguments
     :members:
 
 
+Randomness
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When resuming from a checkpoint generated by :class:`~transformers.Trainer`, every effort is made to restore the
+`python`, `numpy` and `pytorch` RNG states to what they were at the moment of saving that checkpoint,
+which should make the "stop and resume" style of training as close as possible to non-stop training.
+
+However, due to various non-deterministic PyTorch defaults, this might not fully work. If you want full
+determinism please refer to `Controlling sources of randomness
+<https://pytorch.org/docs/stable/notes/randomness.html>`__. As explained there, some of the settings that make
+things deterministic (e.g., ``torch.backends.cudnn.deterministic``) may slow things down, therefore this
+can't be enabled by default, but you can enable those settings yourself if needed.
+
+
 Trainer Integrations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
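
The determinism settings the added section refers to can be sketched as follows. This is a minimal illustration, not part of the patch: the helper name ``set_full_determinism`` is hypothetical, and the ``numpy``/``torch`` imports are guarded so the snippet runs even where those libraries are absent. The guarded lines use only well-known APIs (``numpy.random.seed``, ``torch.manual_seed``, ``torch.backends.cudnn.deterministic``).

```python
import random

def set_full_determinism(seed: int) -> None:
    """Hypothetical helper: seed the python, numpy and pytorch RNGs so a
    fresh run reproduces a resumed one. Imports are guarded so this sketch
    runs even without numpy/torch installed."""
    random.seed(seed)
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        # These settings trade speed for reproducibility, which is why the
        # patch notes they are not enabled by default:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass

# Re-seeding with the same value reproduces the same draws:
set_full_determinism(42)
a = [random.random() for _ in range(3)]
set_full_determinism(42)
b = [random.random() for _ in range(3)]
print(a == b)  # True
```

In practice :class:`~transformers.Trainer` goes further and saves/restores the RNG *states* at checkpoint time rather than re-seeding, so that a resumed run continues mid-stream rather than restarting the random sequence.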