replace the AdagradOptimizer, AdamaxOptimizer, AdadeltaOptimizer, RMSPropOptimizer, LambOptimizer and Momentum #54152
Conversation
Your PR was submitted successfully. Thank you for contributing to the open source project!
✅ This PR's description meets the template requirements!
Sorry to inform you that 26a1caf's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
…for AdagradOptimizer
@@ -249,7 +249,7 @@ def test_nesterov_momentum_optimizer(self):

class TestAdagradOptimizer(unittest.TestCase):
    class MockAdagrad(optimizer.AdagradOptimizer):
These test cases all exercise fluid.Optimizer, but the 2.0 Optimizer already has its own tests (test_xxx_api / test_xxx_op). I suggest deleting the corresponding test cases here rather than rewriting the old-optimizer tests to test the new optimizer; the existing 2.0-style coverage is sketched below.
Same below.
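For context, a minimal sketch of what the existing 2.0-style coverage already looks like (the class name `TestAdagradAPI` is hypothetical, for illustration only; the real cases live in the test_xxx_api / test_xxx_op files):

```python
import unittest

import paddle


class TestAdagradAPI(unittest.TestCase):  # hypothetical name, for illustration
    def test_dygraph_step(self):
        # 2.0 optimizers take `parameters=`, not fluid's `parameter_list=`.
        model = paddle.nn.Linear(2, 2)
        opt = paddle.optimizer.Adagrad(
            learning_rate=0.001, parameters=model.parameters()
        )
        loss = paddle.mean(model(paddle.randn((2, 2))))
        loss.backward()
        opt.step()
        opt.clear_grad()


if __name__ == '__main__':
    unittest.main()
```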
Fixed 👌
@@ -410,9 +410,9 @@ def dygraph_adadelta_mp(self, use_amp, mp):
    paddle.set_device('gpu')
    input = paddle.randn((2, 2))
    model = paddle.nn.Linear(2, 2)
    optimizer = paddle.fluid.optimizer.Adadelta(
In these test_xxx_op.py files, most currently contain both a TestAdadeltaMultiPrecision1_0 and a TestAdadeltaMultiPrecision2_0, testing paddle.fluid.Optimizer and paddle.Optimizer respectively.
The current change turns the 1.0 tests into 2.0 tests that differ from the existing 2.0 tests only in their initialization values. Since this covers no additional scenarios, and the class names become misleading (named 1_0 but testing the 2.0 API), I suggest deleting these tests entirely; the API gap between the two is sketched below.
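For reference, the gap between the 1_0 and 2_0 test classes is essentially just the optimizer constructor, so the replaced tests add nothing new (a sketch mirroring the diff above, using the fluid API as it stood before this PR; values are illustrative):

```python
import paddle

model = paddle.nn.Linear(2, 2)

# What TestAdadeltaMultiPrecision1_0 exercised (fluid API, `parameter_list=`):
opt_fluid = paddle.fluid.optimizer.Adadelta(
    learning_rate=0.001, parameter_list=model.parameters()
)

# What TestAdadeltaMultiPrecision2_0 already exercises (2.0 API, `parameters=`):
opt_v2 = paddle.optimizer.Adadelta(
    learning_rate=0.001, parameters=model.parameters()
)
```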
@@ -408,7 +408,7 @@ def static_adagrad_mp(self, use_amp, mp):
    exe = paddle.static.Executor('gpu')
    train_program = paddle.static.Program()
    startup_program = paddle.static.Program()
    optimizer = paddle.fluid.optimizer.Adagrad(learning_rate=0.001)
Same as test_adadelta_op.py; I suggest removing this unit test. For reference, a 2.0 static-graph counterpart is sketched below.
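If a 2.0 static-graph path were ever needed here, it would look roughly like this (a sketch, assuming `paddle.optimizer.Adagrad.minimize` inside a program guard; the layer shapes and sizes are illustrative):

```python
import paddle

paddle.enable_static()
train_program = paddle.static.Program()
startup_program = paddle.static.Program()
with paddle.static.program_guard(train_program, startup_program):
    x = paddle.static.data(name='x', shape=[2, 2], dtype='float32')
    out = paddle.static.nn.fc(x, size=2)
    loss = paddle.mean(out)
    # 2.0 optimizer: no `parameter_list`; minimize() wires it into the program.
    optimizer = paddle.optimizer.Adagrad(learning_rate=0.001)
    optimizer.minimize(loss)
```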
test/legacy_test/test_adamax_op.py
Outdated
@@ -404,8 +404,8 @@ def dygraph_adamax_mp(self, use_amp, mp):
    paddle.set_device('gpu')
    input = paddle.randn((2, 2))
    model = paddle.nn.Linear(2, 2)
-   optimizer = paddle.fluid.optimizer.Adamax(
-       learning_rate=0.001, parameter_list=model.parameters()
+   optimizer = paddle.optimizer.Adamax(
Same as test_adadelta_op.py; I suggest removing this unit test.
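After the replacement, the dygraph path in the diff above boils down to standard 2.0 usage (a minimal runnable sketch; 'cpu' is used here so it runs without a GPU, whereas the test pins 'gpu'):

```python
import paddle

paddle.set_device('cpu')  # the test uses 'gpu'; 'cpu' keeps the sketch portable
input = paddle.randn((2, 2))
model = paddle.nn.Linear(2, 2)
optimizer = paddle.optimizer.Adamax(
    learning_rate=0.001, parameters=model.parameters()
)
loss = paddle.mean(model(input))
loss.backward()
optimizer.step()
optimizer.clear_grad()
```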
Fixed 👌
@zoooo0820 Please take a look and review~
LGTM
LGTM
LGTM
LGTM
…opOptimizer、LambOptimizer and Momentum (PaddlePaddle#54152)
* replace the AdadeltaOptimizer with Adadelta
* replace the RMSPropOptimizer with RMSProp
* replace the LambOptimizer with Lamb
* replace the momentum in contrib/optimizer.py with Momentum in python/paddle/optimizer/momentum.py
* fix bug
* fix bug
* fix bug
* fix bug of Lamb
* fix bug of Lamb
* fix bug of import
* replace the AdamaxOptimizer with Adamax and change the optimizer base for AdagradOptimizer
* fix bug
* fix bug
* Update optimizer.py
* fix bug
* fix bug
PR types: Others
PR changes: APIs
Description