
[Hackathon 5 No.21] Add doc of paddle.optimizer.lr.LinearLR #6219

Merged 3 commits on Oct 17, 2023
2 changes: 2 additions & 0 deletions docs/api/paddle/optimizer/lr/LRScheduler_cn.rst
Expand Up @@ -39,6 +39,8 @@ LRScheduler

* :code:`CyclicLR`: Cyclic learning rate decay, which treats the learning rate curve as a series of cycles: the learning rate oscillates between a minimum and a maximum learning rate at a fixed frequency. See :ref:`cn_api_paddle_optimizer_lr_CyclicLR`.

* :code:`LinearLR`: The learning rate increases linearly with the step count until it reaches the specified learning rate. See :ref:`cn_api_paddle_optimizer_lr_LinearLR`.

You can subclass this base class to implement any learning rate strategy. Import the base class with ``from paddle.optimizer.lr import LRScheduler``;
the subclass must override the base class's ``get_lr()`` method, otherwise a ``NotImplementedError`` exception is raised.
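The subclassing contract described above can be sketched without Paddle itself. The stand-in base class below is illustrative only (class and attribute names other than ``get_lr`` and ``last_epoch`` are assumptions, not Paddle's actual implementation); it mirrors the documented behavior that an un-overridden ``get_lr()`` raises ``NotImplementedError``:

```python
class LRSchedulerSketch:
    """Illustrative stand-in for the LRScheduler subclassing contract."""

    def __init__(self, learning_rate=0.1, last_epoch=-1):
        self.base_lr = learning_rate
        self.last_epoch = last_epoch

    def get_lr(self):
        # As documented, the base class raises NotImplementedError
        # when get_lr() is not overridden by the subclass.
        raise NotImplementedError


class StepDecaySketch(LRSchedulerSketch):
    """Example subclass: halve the learning rate every 10 epochs."""

    def get_lr(self):
        return self.base_lr * (0.5 ** (max(self.last_epoch, 0) // 10))


sched = StepDecaySketch(learning_rate=0.1)
sched.last_epoch = 25
print(sched.get_lr())  # 0.1 * 0.5 ** 2 = 0.025
```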

Expand Down
49 changes: 49 additions & 0 deletions docs/api/paddle/optimizer/lr/LinearLR_cn.rst
@@ -0,0 +1,49 @@
.. _cn_api_paddle_optimizer_lr_LinearLR:

LinearLR
-----------------------------------

.. py:class:: paddle.optimizer.lr.LinearLR(learning_rate, total_steps, start_factor=1./3, end_factor=1.0, last_epoch=-1, verbose=False)


This API provides a linear learning rate scheduling strategy that adjusts the learning rate linearly over a fixed number of steps.


Parameters
::::::::::::

- **learning_rate** (float) - Base learning rate, used to determine the initial and final learning rates.
- **total_steps** (int) - Number of steps over which the learning rate grows linearly from the initial learning rate to the final learning rate.
- **start_factor** (float) - Initial learning rate factor; the initial learning rate is ``learning_rate * start_factor``.
- **end_factor** (float) - Final learning rate factor; the final learning rate is ``learning_rate * end_factor``.
- **last_epoch** (int, optional) - The epoch index of the previous round; set it to the last epoch when resuming training. Default: -1, meaning the initial learning rate is used.
- **verbose** (bool, optional) - If ``True``, prints a message to standard output ``stdout`` on every update. Default: ``False``.

Returns
::::::::::::
A ``LinearLR`` instance used to adjust the learning rate.

Code Examples
::::::::::::::

COPY-FROM: paddle.optimizer.lr.LinearLR:code-dynamic
COPY-FROM: paddle.optimizer.lr.LinearLR:code-static
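The COPY-FROM directives above pull the runnable examples from the Paddle source at build time. As a supplement, here is a hedged pure-Python sketch of the schedule itself (not Paddle code; the function name and the clamping at ``total_steps`` are assumptions derived from the parameter descriptions above): the multiplicative factor interpolates linearly from ``start_factor`` to ``end_factor`` over ``total_steps`` and then stays constant.

```python
def linear_lr(step, learning_rate, total_steps, start_factor=1. / 3, end_factor=1.0):
    """Learning rate at a given step under a LinearLR-style schedule (sketch)."""
    # Clamp progress so the factor stays at end_factor once total_steps is reached.
    progress = min(max(step, 0), total_steps) / total_steps
    factor = start_factor + (end_factor - start_factor) * progress
    return learning_rate * factor


print(linear_lr(0, 0.5, 5))  # 0.5 * start_factor
print(linear_lr(5, 0.5, 5))  # 0.5 * end_factor
```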

Methods
::::::::::::
step(epoch=None)
'''''''''

The ``step`` function should be called after the optimizer's ``optimizer.step()``. It updates the learning rate according to the epoch count, and the updated learning rate takes effect in the optimizer's next parameter update.

**Parameters**

- **epoch** (int, optional) - The epoch number to set explicitly. Default: ``None``; in that case the ``epoch`` counter is accumulated automatically starting from -1.

**Returns**

None.

**Code Examples**

See the example code above.
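The call order described above (``scheduler.step()`` after ``optimizer.step()``, with the counter accumulated automatically from -1 when ``epoch=None``) can be sketched with stub classes. Everything here except that call order is illustrative, not Paddle's real API:

```python
class StubOptimizer:
    """Placeholder optimizer; only records the learning rate it would use."""

    def __init__(self, lr):
        self.lr = lr

    def step(self):
        pass  # a real optimizer would update parameters here using self.lr


class StubLinearLR:
    """Pure-Python stand-in mimicking the step()/get_lr() interplay."""

    def __init__(self, learning_rate, total_steps, start_factor=1. / 3, end_factor=1.0):
        self.base_lr = learning_rate
        self.total_steps = total_steps
        self.start_factor = start_factor
        self.end_factor = end_factor
        self.last_epoch = -1  # default: counting starts from -1

    def step(self, epoch=None):
        # With epoch=None the counter is accumulated automatically.
        self.last_epoch = self.last_epoch + 1 if epoch is None else epoch

    def get_lr(self):
        progress = min(max(self.last_epoch, 0), self.total_steps) / self.total_steps
        return self.base_lr * (self.start_factor
                               + (self.end_factor - self.start_factor) * progress)


sched = StubLinearLR(learning_rate=0.5, total_steps=5)
opt = StubOptimizer(lr=sched.get_lr())
for _ in range(6):
    opt.step()            # parameter update first
    sched.step()          # then advance the schedule
    opt.lr = sched.get_lr()
print(opt.lr)  # reaches learning_rate * end_factor = 0.5
```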
3 changes: 3 additions & 0 deletions docs/api_guides/low_level/layers/learning_rate_scheduler.rst
Expand Up @@ -61,3 +61,6 @@

* :code:`CyclicLR`: The learning rate cycles between the minimum and maximum learning rate at a fixed frequency according to the specified scaling strategy.
  For the related API reference, see :ref:`cn_api_paddle_optimizer_lr_CyclicLR`

* :code:`LinearLR`: The learning rate increases linearly with the step count until it reaches the specified learning rate.
  For the related API reference, see :ref:`cn_api_paddle_optimizer_lr_LinearLR`
Expand Up @@ -44,3 +44,5 @@ The following content describes the APIs related to the learning rate scheduler:
* :code:`OneCycleLR`: One cycle decay. That is, the initial learning rate first increases to the maximum learning rate, and then it decreases to a minimum learning rate that is much lower than the initial learning rate. For related API Reference please refer to :ref:`cn_api_paddle_optimizer_lr_OneCycleLR`

* :code:`CyclicLR`: Cyclic decay. That is, the learning rate cycles between the minimum and maximum learning rate with a constant frequency using a specified scaling method. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_CyclicLR`

* :code:`LinearLR`: Linear decay. That is, the learning rate is first multiplied by start_factor and then increases linearly until it reaches the end learning rate. For related API Reference please refer to :ref:`api_paddle_optimizer_lr_LinearLR`