diff --git a/examples/flax/language-modeling/README.md b/examples/flax/language-modeling/README.md
index 9b95d9ec0911bd..10a2a02f7f3af4 100644
--- a/examples/flax/language-modeling/README.md
+++ b/examples/flax/language-modeling/README.md
@@ -221,7 +221,7 @@ python run_clm_flax.py \
 Training should converge at a loss and perplexity
 of 3.24 and 25.72 respectively after 20 epochs on a single TPUv3-8.
 This should take less than ~21 hours.
-Training statistics can be accessed on [tfhub.de](https://tensorboard.dev/experiment/2zEhLwJ0Qp2FAkI3WVH9qA).
+Training statistics can be accessed on [tfhub.dev](https://tensorboard.dev/experiment/2zEhLwJ0Qp2FAkI3WVH9qA).
 
 For a step-by-step walkthrough of how to do causal language modeling in Flax, please have a look at [this](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/causal_language_modeling_flax.ipynb) google colab.
 
diff --git a/examples/flax/summarization/README.md b/examples/flax/summarization/README.md
index c94b048ec88b42..2eb21f49b65fe2 100644
--- a/examples/flax/summarization/README.md
+++ b/examples/flax/summarization/README.md
@@ -30,6 +30,6 @@ python run_summarization_flax.py \
 	--push_to_hub
 ```
 
-This should finish in 37min, with validation loss and ROUGE2 score of 1.7785 and 17.01 respectively after 6 epochs. training statistics can be accessed on [tfhub.de](https://tensorboard.dev/experiment/OcPfOIgXRMSJqYB4RdK2tA/#scalars).
+This should finish in 37min, with validation loss and ROUGE2 score of 1.7785 and 17.01 respectively after 6 epochs. training statistics can be accessed on [tfhub.dev](https://tensorboard.dev/experiment/OcPfOIgXRMSJqYB4RdK2tA/#scalars).
 
 > Note that here we used default `generate` arguments, using arguments specific for `xsum` dataset should give better ROUGE scores.
diff --git a/examples/legacy/benchmarking/README.md b/examples/legacy/benchmarking/README.md
index 03e174770d1077..63cf4e367c3d31 100644
--- a/examples/legacy/benchmarking/README.md
+++ b/examples/legacy/benchmarking/README.md
@@ -22,5 +22,5 @@ If you would like to list benchmark results on your favorite models of the [mode
 
 | Benchmark description | Results | Environment info | Author |
 |:----------|:-------------|:-------------|------:|
-| PyTorch Benchmark on inference for `google-bert/bert-base-cased` |[memory](https://github.com/patrickvonplaten/files_to_link_to/blob/master/bert_benchmark/inference_memory.csv) | [env](https://github.com/patrickvonplaten/files_to_link_to/blob/master/bert_benchmark/env.csv) | [Partick von Platen](https://github.com/patrickvonplaten) |
-| PyTorch Benchmark on inference for `google-bert/bert-base-cased` |[time](https://github.com/patrickvonplaten/files_to_link_to/blob/master/bert_benchmark/inference_time.csv) | [env](https://github.com/patrickvonplaten/files_to_link_to/blob/master/bert_benchmark/env.csv) | [Partick von Platen](https://github.com/patrickvonplaten) |
+| PyTorch Benchmark on inference for `google-bert/bert-base-cased` |[memory](https://github.com/patrickvonplaten/files_to_link_to/blob/master/bert_benchmark/inference_memory.csv) | [env](https://github.com/patrickvonplaten/files_to_link_to/blob/master/bert_benchmark/env.csv) | [Patrick von Platen](https://github.com/patrickvonplaten) |
+| PyTorch Benchmark on inference for `google-bert/bert-base-cased` |[time](https://github.com/patrickvonplaten/files_to_link_to/blob/master/bert_benchmark/inference_time.csv) | [env](https://github.com/patrickvonplaten/files_to_link_to/blob/master/bert_benchmark/env.csv) | [Patrick von Platen](https://github.com/patrickvonplaten) |