[Hackathon No.13] Add the CyclicLR learning-rate scheduler to Paddle #4315
.. _cn_api_paddle_optimizer_lr_CyclicLR:

CyclicLR
-----------------------------------

.. py:class:: paddle.optimizer.lr.CyclicLR(base_learning_rate, max_learning_rate, step_size_up, step_size_down, mode, exp_gamma, scale_fn, scale_mode, last_epoch, verbose)

CyclicLR cycles the learning rate between two boundaries at a fixed frequency. The schedule is treated as a sequence of cycles: within each cycle, the learning rate varies between the minimum and maximum learning rate according to the specified scaling strategy.

Related paper: `Cyclical Learning Rates for Training Neural Networks <https://arxiv.org/abs/1506.01186>`_

Three scaling strategies are built in:

- **triangular**: a triangular cycle without any amplitude scaling.
- **triangular2**: halves the initial amplitude in each cycle.
- **exp_range**: scales the initial amplitude exponentially in each cycle, following :math:`exp\_gamma^{cycle\ iterations}`.

The initial amplitude is defined as ``max_learning_rate - base_learning_rate``; :math:`exp\_gamma` is a constant, and :math:`cycle\ iterations` denotes either the cycle count or the iteration count, depending on ``scale_mode``.
A cycle is computed as :math:`cycle = 1 + floor(epoch / (step\_size\_up + step\_size\_down))`. Note that CyclicLR should call ``step`` after every training batch, so ``epoch`` here is equivalent to ``iterations``: both denote the current overall iteration count.

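As a concrete illustration of the formulas above, the sketch below (not Paddle's implementation; the helper ``triangular_lr`` is made up for illustration) computes the 'triangular' learning rate at a given iteration:

.. code-block:: python

    import math

    def triangular_lr(iteration, base_lr=0.01, max_lr=0.1,
                      step_size_up=100, step_size_down=100):
        """Illustrative 'triangular' cyclic LR at a given iteration."""
        total = step_size_up + step_size_down
        cycle = 1 + math.floor(iteration / total)    # cycle = 1 + floor(epoch / (up + down))
        pos = iteration - (cycle - 1) * total        # position within the current cycle
        if pos <= step_size_up:                      # rising phase
            pct = pos / step_size_up
        else:                                        # falling phase
            pct = 1.0 - (pos - step_size_up) / step_size_down
        amplitude = max_lr - base_lr                 # initial amplitude
        return base_lr + amplitude * pct             # 'triangular' applies no extra scaling

At iteration 0 this yields ``base_lr``, at iteration ``step_size_up`` it peaks at ``max_lr``, and it returns to ``base_lr`` at the end of the cycle.
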
Parameters
::::::::::::

- **base_learning_rate** (float) - The initial learning rate, which is also the lower boundary of the learning-rate cycle.
- **max_learning_rate** (float) - The upper boundary of the cycle. Note that the actual learning rate is the sum of ``base_learning_rate`` and the scaled initial amplitude, so ``max_learning_rate`` may never actually be reached.
- **step_size_up** (int) - The number of training steps needed to grow from the initial learning rate to the maximum learning rate.
- **step_size_down** (int, optional) - The number of training steps needed to decay from the maximum learning rate back to the initial learning rate. If not specified, it defaults to ``step_size_up``.
- **mode** (str, optional) - One of 'triangular', 'triangular2' or 'exp_range'; the corresponding strategies are described above. This argument is ignored when ``scale_fn`` is specified. Default: 'triangular'.
- **exp_gamma** (float, optional) - The constant used in the 'exp_range' scaling function. Default: 1.0.
- **scale_fn** (function, optional) - A function of exactly one argument that satisfies 0 <= scale_fn(x) <= 1 for any input x; if specified, ``mode`` is ignored. Default: ``None``.
- **scale_mode** (str, optional) - Either 'cycle' or 'iterations', indicating whether the scaling function takes the cycle count or the iteration count as its input. Default: 'cycle'.
- **last_epoch** (int, optional) - The epoch number of the previous run; set it to the last epoch when resuming training. Default: -1, which starts from the initial learning rate.
- **verbose** (bool, optional) - If ``True``, a message is printed to standard output ``stdout`` on every update. Default: ``False``.

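To illustrate the ``scale_fn`` constraint above, here is a hedged sketch of a custom scaling function (the name ``halve_per_cycle`` is made up for illustration); every value it returns lies in [0, 1], as required:

.. code-block:: python

    # Hypothetical custom scaling function: must return values in [0, 1].
    def halve_per_cycle(cycle):
        return 1.0 / (2.0 ** (cycle - 1))    # 1, 0.5, 0.25, ... per cycle

    # Passed via scale_fn (with scale_mode='cycle'), this mirrors the
    # built-in 'triangular2' decay described above.
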
Returns
::::::::::::
A ``CyclicLR`` instance for adjusting the learning rate.

Code Example
::::::::::::

COPY-FROM: paddle.optimizer.lr.CyclicLR

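The canonical example is pulled in by COPY-FROM above; the following is only an illustrative sketch of typical usage, assuming the signature documented here, where ``step()`` is called once per batch:

.. code-block:: python

    import paddle

    linear = paddle.nn.Linear(10, 10)
    scheduler = paddle.optimizer.lr.CyclicLR(base_learning_rate=0.001,
                                             max_learning_rate=0.01,
                                             step_size_up=1000)
    sgd = paddle.optimizer.SGD(learning_rate=scheduler,
                               parameters=linear.parameters())

    for batch_id in range(5):
        x = paddle.uniform([8, 10])
        loss = paddle.mean(linear(x))
        loss.backward()
        sgd.step()
        sgd.clear_grad()
        scheduler.step()    # CyclicLR steps once per batch
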
Methods
::::::::::::
step(epoch=None)
'''''''''

``step`` should be called after ``optimizer.step()``. It updates the learning rate according to the current epoch number, and the updated learning rate takes effect the next time the optimizer updates the parameters.

**Parameters**

- **epoch** (int, optional) - The epoch number. Default: None; in that case the ``epoch`` count is accumulated automatically, starting from -1.

**Returns**

None.

**Code Example**

Refer to the example code above.