Optimizer Design and related Operator #3655

Closed
jacquesqiao opened this issue Aug 23, 2017 · 2 comments
jacquesqiao commented Aug 23, 2017

  1. Implement various optimizer operators.
    1. SGD (done) and others listed in https://github.com/PaddlePaddle/Paddle/projects/22
  2. Add optimizer operators into BlockDesc.
  3. Python module
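As a point of reference for step 1, the core computation an SGD operator performs can be sketched as below. This is a minimal illustration of the update rule only, not Paddle's actual operator implementation:

```python
# Minimal sketch of the SGD update an optimizer operator computes.
# This is illustrative only; the real operator lives in C++ and works
# on framework tensors, not NumPy arrays.
import numpy as np

def sgd_op(param, grad, learning_rate=0.01):
    """Vanilla SGD update: param <- param - learning_rate * grad."""
    return param - learning_rate * grad

w = np.array([1.0, 2.0, 3.0])
g = np.array([0.1, 0.1, 0.1])
w_new = sgd_op(w, g, learning_rate=0.1)
# each element decreases by learning_rate * grad = 0.01
```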

In the new operator-based framework, we will use operators to build the optimizer, and these operators will be added to a block. There is some work to do:

  • Write a Python wrapper for the optimizer and provide a proper interface for users.
  • The optimizer for multi-GPU and multi-machine settings differs from the single-device one:
    • updating the parameters happens in a separate stage.
    • it communicates with a parameter server.
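The Python wrapper mentioned above could look roughly like the following. All names here (Optimizer, SGDOptimizer, minimize) are assumptions for illustration, not Paddle's actual API; a real implementation would append ops to a BlockDesc rather than to a plain list:

```python
# Hypothetical sketch of the Python-side optimizer wrapper. Class and
# method names are assumptions; the "block" is a stand-in for a BlockDesc.
class Optimizer:
    """Turns (param, grad) pairs into update operators appended to a block."""

    def minimize(self, block, params_and_grads):
        for param, grad in params_and_grads:
            self._append_update_op(block, param, grad)

    def _append_update_op(self, block, param, grad):
        raise NotImplementedError


class SGDOptimizer(Optimizer):
    def __init__(self, learning_rate=0.01):
        self.learning_rate = learning_rate

    def _append_update_op(self, block, param, grad):
        # In the real framework this would append an "sgd" op to the
        # BlockDesc; here we just record a description of the op.
        block.append({"type": "sgd",
                      "inputs": {"Param": param, "Grad": grad},
                      "attrs": {"learning_rate": self.learning_rate}})


block = []  # stand-in for a BlockDesc
SGDOptimizer(learning_rate=0.1).minimize(block, [("w", "w@GRAD")])
# block now holds one "sgd" op description for parameter "w"
```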

There are three situations to consider:

  1. single machine single device.
  2. single machine multiple devices.
  3. multiple machine.
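The three situations above could map to different update strategies roughly as follows. The function and strategy names are purely illustrative assumptions, not part of any existing Paddle interface:

```python
# Hypothetical sketch: choosing an update strategy for the three
# situations listed above. Strategy names are illustrative only.
def choose_update_strategy(num_machines, num_devices):
    if num_machines > 1:
        # Multiple machines: parameter updates go through a parameter
        # server in a separate communication stage.
        return "parameter_server"
    if num_devices > 1:
        # Single machine, multiple devices: aggregate gradients across
        # devices first, then apply the update in a separate stage.
        return "local_aggregate_then_update"
    # Single machine, single device: run the optimizer op directly.
    return "local_inplace"
```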

Plan:

  • 8/24/2017: survey and discuss how TensorFlow, Caffe2, and PyTorch handle optimizers on the Python side. [done] Optimizer Survey #3672
  • 8/25/2017: survey and discuss distributed optimizers.
  • 8/25/2017: design doc.

Distribution-related issue: #3656

@jacquesqiao
Closing this old one and opening a new issue: #4679
