
Adam solver #2856

Closed · wants to merge 1 commit

Conversation

@PatWie (Contributor) commented Aug 4, 2015

This commit implements the Adam solver by Kingma et al. for CPU and
GPU. All solver parameters are defined in caffe.proto. This also
adds an example for the MNIST dataset.

See issue #2827.

Before merging, please review the code. I will add changes to this branch (and rebase) if anything needs changing.

@shelhamer added the focus label Aug 4, 2015
@shelhamer (Member)

@philkr could you review this if you have a chance?

optional float delta = 31 [default = 1e-8];
// parameters for the Adam solver
optional float beta1 = 37 [default = 0.9];
Contributor (review comment):

Why not use momentum here, to be consistent with other solvers?

@shelhamer (Member)

@PatWie thanks for the solver! All SGD solvers need gradient checks. See for instance the AdaGrad tests: https://github.com/BVLC/caffe/blob/master/src/caffe/test/test_gradient_based_solver.cpp#L431-L483

const Dtype beta1 = this->param_.beta1();
const Dtype beta2 = this->param_.beta2();

const int t = this->iter_ / this->param_.stepsize() + 1;
Contributor (review comment):

Why divide by stepsize here?

Contributor Author (reply):

I think t is the epoch rather than the iteration by the definition of Caffe.

@PatWie (Contributor Author) commented Aug 4, 2015

@shelhamer Ah, I didn't realize that there is already a unit test. I will add one, of course.

caffe_add(N,
    this->val_t_[param_id]->cpu_data(),
    this->val_m_[param_id]->cpu_data(),
    this->val_m_[param_id]->mutable_cpu_data());
Contributor (review comment):

The three commands above can be written as a single caffe_cpu_axpby using beta1 instead of 0 and val_m_ instead of val_t_.
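For reference, a minimal sketch of the suggested fusion, assuming the Dtype, N, beta1, and blob members from the surrounding code (using net_params[param_id]->cpu_diff() as the gradient source is an assumption based on the rest of the PR):

```cpp
// Fused first-moment update m <- beta1 * m + (1 - beta1) * g in one BLAS call.
// caffe_cpu_axpby computes Y = alpha * X + beta * Y.
caffe_cpu_axpby(N,
    Dtype(1) - beta1, net_params[param_id]->cpu_diff(),  // X = gradient g
    beta1, this->val_m_[param_id]->mutable_cpu_data());  // Y = m
```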

Contributor Author (reply):

Ah, I see. BLAS is completely new to me. I will change this tomorrow in all the places in my code.

@Nanne commented Aug 5, 2015

I think there's some confusion going on in the code due to the usage of the stepsize param, which is made worse by using lr_policy "step" in the MNIST example.

The alpha/stepsize from the paper should be set via base_lr and used with lr_policy: "fixed", as I don't see any recommendations for changing alpha during training. This way you can also get rid of gamma and power in the prototxt (the latter wasn't being used anyway).

The stepsize param should only be used together with lr_policy "step", and if we already set alpha via base_lr then it is not needed at all. t can just be iter_ + 1, since as far as I know it is not needed to compute the effective stepsize (see the sketch after this comment). This also removes the need for a check that stepsize > 0 in the header.

Moreover, it makes sense to change the MNIST example to use the recommended value from the paper for the base_lr (0.001) and explicitly set momentum and momentum2 to 0.9 and 0.999 respectively, rather than relying on the default values.
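As an aside, a minimal sketch of the bias-corrected effective step size from the Kingma & Ba paper with t = iter_ + 1 (variable names here are illustrative, not from the PR; assumes <cmath>):

```cpp
// Bias-corrected step size: alpha_t = alpha * sqrt(1 - beta2^t) / (1 - beta1^t).
const int t = this->iter_ + 1;
const Dtype correction = std::sqrt(Dtype(1) - std::pow(beta2, t)) /
    (Dtype(1) - std::pow(beta1, t));
const Dtype alpha_t = rate * correction;  // rate is the base learning rate
```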

@PatWie (Contributor Author) commented Aug 5, 2015

I applied all the changes: memory usage, the solver_mnist proto, and t is now the current iteration instead of the epoch. There is also a unit test now.
There are still some trivial things to do:

  • update the solver page in the wiki
  • implement the learning-rate heuristic 1/sqrt(t) to reproduce the results from the paper

const Dtype beta2 = this->param_.momentum2();

// we create an alias for memory from the SGD solver for convenience
shared_ptr<Blob<Dtype> > &val_m = this->history_[param_id];
Contributor (review comment):

A reference to a shared pointer is never a good idea; just copy the pointer.
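A minimal sketch of the suggested change, using the same names as the excerpt above:

```cpp
// Copying the shared_ptr bumps the reference count, so the blob stays alive
// even if history_ is reallocated; a reference into the vector could dangle.
shared_ptr<Blob<Dtype> > val_m = this->history_[param_id];
```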

@philkr (Contributor) commented Aug 5, 2015

The solver looks good to me now. I haven't tested it though. I do think that caffe_ipow should be its own PR if we really want to add it. Currently it just bloats this PR and doesn't add any benefit (see https://en.wikipedia.org/wiki/Amdahl%27s_law).

@ronghanghu added the RH label Aug 5, 2015

// update v <- \beta_2 v_{t-1} + (1-\beta_2) g_t^2
caffe_mul(N,
net_params[param_id]->cpu_diff(),
Contributor (review comment):

Please fix indentation -- use 4-space indents when continuing from previous lines (https://google-styleguide.googlecode.com/svn/trunk/cppguide.html#Spaces_vs._Tabs).
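For reference, a sketch of the complete second-moment update that the truncated excerpt above belongs to, with the requested 4-space continuation indents (using update_ as scratch space and val_v as the second-moment blob are assumptions, not code from the PR):

```cpp
// v <- beta2 * v + (1 - beta2) * g^2
caffe_mul(N,
    net_params[param_id]->cpu_diff(),
    net_params[param_id]->cpu_diff(),
    this->update_[param_id]->mutable_cpu_data());  // scratch = g^2
caffe_cpu_axpby(N,
    Dtype(1) - beta2, this->update_[param_id]->cpu_data(),
    beta2, val_v->mutable_cpu_data());
```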

@ronghanghu (Member)

@PatWie Thank you for this great PR! I just added some comments on the code. Please fix the indentation to 4-space indents when continuing from previous lines, and add more test cases. After that, squash into one commit and I can merge.

@ronghanghu (Member)

#2836 and #2866 introduced new conflicts to be resolved.

@ronghanghu mentioned this pull request Aug 9, 2015
@shelhamer mentioned this pull request Aug 9, 2015

template <typename Dtype>
void AdamSolver<Dtype>::ComputeUpdateValue(int param_id, Dtype rate) {
  const vector<shared_ptr<Blob<Dtype> > >& net_params = this->net_->params();
Member (review comment):

To be consistent with #2866, use const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params(); instead.

@PatWie (Contributor Author) commented Aug 10, 2015

I rebased onto the latest master commit and fixed up the conflicts with #2836 and #2866.

It seems difficult for me to write the other tests (although I am pretty sure the implementation is correct) without rewriting a fairly large amount of code (mostly duplicating it). In addition, it is not clear what the favored way is, nor some details such as whether the code should prevent the use of weight decay, regularization, and other parameters. I think only the following tests make sense (I checked the existing tests):

  • TestLeastSquaresSingle
  • TestLeastSquaresMultiIter
  • TestSnapshot
  • TestSnapshotShare

One has to deal with the following issues:

  • The snapshot function has to write/read two additional parameters (m, v) from the paper, which depend on the actual iteration. Currently, it only supports the history blob.
  • The test implementation has to save m and v for testing the derivative. These depend on the actual iteration.
  • I misused the history_, update_, and temp_ members, since Adam does not really rely on them directly. The current commit again uses its own members from the Adam class. How should this be handled: using the initialized members from SGD, or using members from the Adam class?

Possible solutions would be:

  • Add two additional fields to the SolverState blob and add members to SGDSolver, which would not be used by the other solvers

or

  • Copy and paste most of the SGDSolver class for Adam

This seems to be a serious design issue. Implementing the solver is fairly easy, but writing nearly the same code again and squeezing it into the current testing class requires hacky solutions.

To refactor the unit test cases, one could use the curiously recurring template pattern (CRTP) to move everything solver-specific into the derived classes and just refer to the members in the base class; see the sketch below. But then the testing method would need to use the solver as a template parameter, not just as a member field.
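A minimal CRTP sketch of that idea (class and method names here are hypothetical, not code from this PR):

```cpp
// The base class calls into the concrete solver without virtual dispatch;
// each derived solver owns its own history members.
template <typename Dtype, typename Derived>
class SolverBase {
 public:
  void Step() {
    static_cast<Derived*>(this)->ComputeUpdateValue();
  }
};

template <typename Dtype>
class AdamLikeSolver : public SolverBase<Dtype, AdamLikeSolver<Dtype> > {
 public:
  void ComputeUpdateValue() {
    // Adam-specific update using the m and v members owned by this class.
  }
};
```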

Since this needs profound changes in the code, I would suggest letting a BVLC maintainer decide the next steps, or how to rewrite the solver interface with respect to these issues.

I am glad to help, but don't want to rewrite almost everything in reference to #2890.

@ronghanghu (Member)

@PatWie Thanks for your update!

I checked the math and am quite confident in your solver implementation. However, at this point the PR is still not working, because snapshotting currently won't work for AdamSolver.

> In addition, it is not clear what the favored way is, nor some details such as whether the code should prevent the use of weight decay, regularization, and other parameters.

Since right now all the weight decay for AdamSolver is handled in SGDSolver::Regularize, these weight decay tests become less important, though it is still good to have them. It is also good to have accumulation tests to check whether gradient accumulation is working correctly.

> The snapshot function has to write/read two additional parameters (m, v) from the paper, which depend on the actual iteration. Currently, it only supports the history blob.

The original design is to use SGDSolver::history_ to store all history blobs to be saved and recovered during snapshot. The size of the history vector does not have to be the same as the number of learnable parameters. In the Adam solver, you can have twice as many history blobs as learnable parameters, in order to store both m and v. This can be done in an AdamPreSolve function. You may take a look at the corresponding implementation in the AdaDelta solver #2782 (which also stores two history variables) to see how this is done.

https://github.com/matthiasplappert/caffe/blob/adadelta/src/caffe/solver.cpp#L937-L947
https://github.com/matthiasplappert/caffe/blob/adadelta/src/caffe/solver.cpp#L975
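A minimal sketch of that approach, modeled on the AdaDelta pattern linked above (an illustration under those assumptions, not the final code):

```cpp
// Double history_ so the first half (allocated by SGDSolver::PreSolve)
// holds m and the appended second half holds v; snapshotting then works
// unchanged, since it already serializes every blob in history_.
template <typename Dtype>
void AdamSolver<Dtype>::AdamPreSolve() {
  const vector<Blob<Dtype>*>& net_params = this->net_->learnable_params();
  for (int i = 0; i < net_params.size(); ++i) {
    const vector<int>& shape = net_params[i]->shape();
    this->history_.push_back(
        shared_ptr<Blob<Dtype> >(new Blob<Dtype>(shape)));
  }
}
```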

> The test implementation has to save m and v for testing the derivative. These depend on the actual iteration.

The test implementation can access the solver's history; you may also take a look at the AdaDelta #2782 implementation: https://github.com/matthiasplappert/caffe/blob/adadelta/src/caffe/test/test_gradient_based_solver.cpp#L299-L307

> I misused the history_, update_, and temp_ members, since Adam does not really rely on them directly. The current commit again uses its own members from the Adam class. How should this be handled: using the initialized members from SGD, or using members from the Adam class?

You should use SGDSolver::history_ for all history variables that need to be stored and recovered. See above.

> To refactor the unit test cases, one could use the curiously recurring template pattern to move everything solver-specific into the derived classes and just refer to the members in the base class. But then the testing method would need to use the solver as a template parameter, not just as a member field.

> Since this needs profound changes in the code, I would suggest letting a BVLC maintainer decide the next steps, or how to rewrite the solver interface with respect to these issues. I am glad to help, but don't want to rewrite almost everything in reference to #2890.

I'll handle #2890 after merging the Adam and AdaDelta solvers, and I would like to refactor these solvers and look into the curiously recurring template pattern.

For now, to get this PR merged, we need to make the snapshot work. So let's simply put val_m and val_v into SGDSolver::history_, read them during the test implementation as in #2782, and complete the other three tests you think are reasonable. This shouldn't involve too much change in your current code. After that, we can merge this and I can handle the rest in #2890.

@jeffdonahue (Contributor)

Rather than putting both vectors into history_ (which I think would make the indexing pretty messy and unnatural), I'd recommend keeping the two vectors separate and overriding SnapshotSolverState to serialize both of them.

@ronghanghu (Member)

@PatWie after a private discussion with @jeffdonahue, I still feel that using the existing history (and expanding it, as is done in #2782) would be easier to implement.

> Rather than putting both vectors into history_ (which I think would make the indexing pretty messy and unnatural), I'd recommend keeping the two vectors separate and overriding SnapshotSolverState to serialize both of them.

For the indexing, you can create reference variables like val_m_vec = this->history_ and val_v_vec = this->history_ + net_params.size() (either in AdamSolver or in its member functions, as you like), and just use history_ as a storage buffer; see the sketch below. In this way the current snapshot functions still work.
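A minimal sketch of that offset indexing inside ComputeUpdateValue (variable names here are illustrative):

```cpp
// m sits in the first half of history_, v in the second half,
// offset by the number of learnable parameters.
const int update_history_offset = net_params.size();
Blob<Dtype>* val_m = this->history_[param_id].get();
Blob<Dtype>* val_v = this->history_[param_id + update_history_offset].get();
```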

After merging this PR and AdaDelta, I can address #2890 and refactor solvers afterwards.

@ronghanghu (Member)

Another rebase is needed here after #2909 and #2903.

@ronghanghu mentioned this pull request Aug 14, 2015
@ronghanghu (Member)

Carried on in #2918; @PatWie, please take a look when you have time.
