From e3e70f95516715ce6d4c7e653c7d96b554bd8254 Mon Sep 17 00:00:00 2001
From: Amine Abdaoui
Date: Mon, 26 Apr 2021 15:08:43 +0200
Subject: [PATCH] docs(examples): fix link to TPU launcher script (#11427)

---
 examples/pytorch/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/pytorch/README.md b/examples/pytorch/README.md
index c01ba6749db65f..541b5c7c84f3de 100644
--- a/examples/pytorch/README.md
+++ b/examples/pytorch/README.md
@@ -119,7 +119,7 @@ When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context an
 very detailed [pytorch/xla README](https://github.com/pytorch/xla/blob/master/README.md).
 
 In this repo, we provide a very simple launcher script named
-[xla_spawn.py](https://github.com/huggingface/transformers/tree/master/examples/xla_spawn.py) that lets you run our
+[xla_spawn.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/xla_spawn.py) that lets you run our
 example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your
 regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for
 `torch.distributed`):
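
For reference, the README paragraph touched by this patch describes the launcher's usage: pass `--num_cores` to `xla_spawn.py`, followed by your regular training script and its arguments. A minimal sketch of such an invocation is shown below; the script path and its arguments are placeholders, not part of this diff:

```bash
# Illustrative only: run a training script on 8 TPU cores via the launcher.
# The script path and training arguments below are placeholders.
python examples/pytorch/xla_spawn.py --num_cores 8 \
    path/to/your_training_script.py \
    --output_dir /tmp/output
```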