From 390bd14b3a4263cfe93e378badcdf87f6c4bb70e Mon Sep 17 00:00:00 2001
From: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
Date: Sat, 27 Aug 2022 09:06:14 +0200
Subject: [PATCH 1/2] Fix broken link
---
docs/source/en/main_classes/deepspeed.mdx | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/docs/source/en/main_classes/deepspeed.mdx b/docs/source/en/main_classes/deepspeed.mdx
index 11831dbdc401da..a0d6dcc7769e79 100644
--- a/docs/source/en/main_classes/deepspeed.mdx
+++ b/docs/source/en/main_classes/deepspeed.mdx
@@ -37,7 +37,7 @@ won't be possible on a single GPU.
2. If you don't use [`Trainer`] and want to use your own Trainer where you integrated DeepSpeed
yourself, core functions like `from_pretrained` and `from_config` include integration of essential
parts of DeepSpeed like `zero.Init` for ZeRO stage 3 and higher. To tap into this feature read the docs on
- [deepspeed-non-trainer-integration](#deepspeed-non-trainer-integration).
+ [non-Trainer DeepSpeed Integration](#nontrainer-deepspeed-integration).
What is integrated:
@@ -1849,7 +1849,6 @@ In this case you usually need to raise the value of `initial_scale_power`. Setti
-
## Non-Trainer Deepspeed Integration
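
The hunk above repoints readers to the "Non-Trainer Deepspeed Integration" section. As a hedged illustration of what that section describes (not part of the patch itself), the sketch below shows the documented pattern of creating an `HfDeepSpeedConfig` before `from_pretrained` so that ZeRO stage 3 weight partitioning via `zero.Init` kicks in at load time; the config dict values are placeholder assumptions.

```python
# Minimal sketch, assuming transformers and deepspeed are installed.
from transformers import AutoModel
from transformers.deepspeed import HfDeepSpeedConfig

# Placeholder ZeRO-3 config; a real config would also set dtype, batch sizes, etc.
ds_config = {
    "zero_optimization": {"stage": 3},
    "train_micro_batch_size_per_gpu": 1,
}

# Must be instantiated and kept alive *before* from_pretrained is called,
# so the integration knows ZeRO-3 is in use.
dschf = HfDeepSpeedConfig(ds_config)

# With the config object alive, from_pretrained loads the model in a
# ZeRO-3 partitioned fashion instead of materializing it fully on one device.
model = AutoModel.from_pretrained("gpt2")
```
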
From 35a0a0cb6108ade382fa89a561972872953e97c8 Mon Sep 17 00:00:00 2001
From: Stas Bekman
Date: Sun, 28 Aug 2022 19:19:00 -0700
Subject: [PATCH 2/2] Trigger CI