Ideas for finite_difference_grad.py #182
Comments
Thanks for the suggestions. I agree with many of the suggestions you made. Yes, I think it would be a good idea to use the optimization step as a finite difference step and compare it with the projection of the gradient in that direction. For large steps, a significant disagreement can be expected, but the agreement should improve as the steps become smaller, assuming the gradient and energy are consistent. This could be done as part of the geometry optimization loop so that the user can be warned when there is an energy/gradient inconsistency. I don't think additional steps to improve the numerical gradient quality are necessary, but it could be nice if implemented cleanly. In fact, it may be possible to use the energy change to "correct" the quantum chemical gradient, similar to how one updates the Hessian using BFGS, but I think that is a new research project.
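A minimal sketch of that check, assuming a hypothetical `compute_energy(coords)` single-point callable and the analytic gradient already in hand (the names here are illustrative, not an existing API in this project):

```python
import numpy as np

def check_gradient_along_step(compute_energy, gradient, coords, step, h=1e-4):
    """Compare the analytic directional derivative g.d with a central finite
    difference of the energy along the (normalized) optimization step.

    compute_energy(coords) -> float is a hypothetical single-point callable;
    gradient is the analytic gradient at coords; step is the optimization step.
    """
    d = step / np.linalg.norm(step)            # unit vector along the step
    analytic = float(np.dot(gradient.ravel(), d.ravel()))
    e_plus = compute_energy(coords + h * d)
    e_minus = compute_energy(coords - h * d)
    numeric = (e_plus - e_minus) / (2.0 * h)   # central difference, O(h^2) error
    return analytic, numeric, abs(analytic - numeric)
```

Repeating this for a few decreasing values of h should show the disagreement shrinking when the energy and gradient are consistent, as described above.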
Does it allow running N jobs on the same machine? I only have one for now :)
Yes. You simply run finite_difference_grad.py and multiple copies of work_queue_worker on the same machine.
Another practical consideration: I would like to evaluate the quality of gradients for the 9-molecule cluster which we've discussed in another issue. However, that system contains 180 atoms, so it would require a huge amount of resources to compute; evaluating 3 points for a single step vector could be done quickly.
In Perl there is Parallel::ForkManager, which manages a process pool automatically; I'm pretty sure Python has something similar as well. One benefit is that sequential SCF calculations are more efficient and can each take a different number of iterations, so with separate processes they won't slow each other down. Another benefit is the ability to use more processor cores/threads: AFAIK hybrid DFT doesn't benefit from parallelization beyond ~20 concurrent threads, while a massively parallel approach would allow using, say, 32 threads on the same machine without any synchronization overhead.
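As a rough illustration of the process-pool idea in Python, using only the standard library (the `single_point_energy` function is a hypothetical stand-in for whatever runs one displaced geometry):

```python
from concurrent.futures import ProcessPoolExecutor

def single_point_energy(coords):
    # Hypothetical: run one independent single-point calculation on a
    # displaced geometry and return its energy.
    ...

def energies_for_displacements(displaced_geometries, n_workers=32):
    # Each displacement is an independent job, so a process pool keeps all
    # cores busy even when individual SCF runs take different numbers of
    # iterations.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(single_point_energy, displaced_geometries))
```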
If we pass step vectors to finite_difference_grad.py, we can check the gradient along the direction of the step. This can even be done as part of the optimization algorithm if needed. Having a cheap check would allow performing such checks casually, without the need to allocate resources for a large task equivalent to a numerical Hessian calculation.
For progress indication, tqdm can wrap any iterable and automatically generate a nice progress bar in the terminal. See e.g. annulen/vibAnalysis@334d920 for a usage example.
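tqdm usage really is a one-liner; a sketch, where `displaced_geometries` and `run_single_point` are placeholders for whatever the script actually iterates over:

```python
from tqdm import tqdm

# Wrapping the iterable is all that is needed; tqdm updates the progress
# bar as each displaced geometry finishes.
for geometry in tqdm(displaced_geometries, desc="Finite difference points"):
    run_single_point(geometry)
```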
finite_difference_grad.py should share at least part of its command line argument definitions with params.py, to avoid code duplication and at the same time allow using all relevant features. For example, I had to patch it before I could use the ASE engine; however, other engine-specific arguments should also be handled, in case users of the respective engines need to check gradients.
Perhaps grp_software has to be exported from params.py, and maybe some other option groups too.
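A minimal sketch of what sharing an option group could look like, assuming params.py exported a helper that registers its software-related options on a caller-supplied parser (the helper and option names below are hypothetical, not the current params.py API):

```python
import argparse

def add_software_args(parser):
    # Hypothetical helper that params.py could export so that the optimizer
    # and finite_difference_grad.py register the same engine-related options.
    grp_software = parser.add_argument_group("software", "Which quantum chemistry program to use")
    grp_software.add_argument("--engine", type=str, help="Engine name, e.g. ase")
    grp_software.add_argument("--ase-class", type=str, help="ASE calculator class (illustrative)")
    return grp_software

# finite_difference_grad.py would then add only its own options on top:
parser = argparse.ArgumentParser(description="Finite difference gradient check")
add_software_args(parser)
parser.add_argument("--step-size", type=float, default=1e-3, help="Displacement step size")
args = parser.parse_args()
```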
On the other hand, I have experience with another module for handling command line arguments: absl.flags. It allows defining flags right in the modules where they are used, and any script that uses those modules directly or indirectly will automatically be able to parse their flags from argv. The downside is an extra dependency and a somewhat less user-friendly --help. I can explain more about it if you are interested.
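For reference, the absl.flags pattern looks roughly like this (the flag names are made up for illustration):

```python
from absl import app, flags

FLAGS = flags.FLAGS

# Each module defines the flags it consumes; these names are illustrative.
flags.DEFINE_float("fd_step", 1e-3, "Finite difference displacement step.")
flags.DEFINE_integer("n_workers", 8, "Number of concurrent single-point jobs.")

def main(argv):
    del argv  # positional remainder from absl, unused here
    print("step =", FLAGS.fd_step, "workers =", FLAGS.n_workers)

if __name__ == "__main__":
    app.run(main)  # parses sys.argv, then calls main
```

When the DEFINE_* calls live in imported modules instead of the entry-point script, the flags are still picked up automatically, which is the behaviour described above.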