Tell Ipopt it is a Feasibility Problem #597

Is there a way of telling Ipopt that one is "only" looking for a feasible point that satisfies the constraints? Currently, I supply a dummy objective and a zero-only objective gradient (a sketch of this setup is shown below). The reason I am asking is that I could imagine some speedup if the objective function and the objective gradient were not called at all.
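To illustrate, the relevant members of such a TNLP subclass look roughly like this (the class name MyFeasibilityNLP is a placeholder, and all other required TNLP overrides are omitted):

```cpp
#include "IpTNLP.hpp"

using namespace Ipopt;

// Dummy objective: always zero, so only the constraints matter.
bool MyFeasibilityNLP::eval_f(
   Index         n,
   const Number* x,
   bool          new_x,
   Number&       obj_value
)
{
   obj_value = 0.0;  // constant dummy objective
   return true;
}

// Matching gradient: identically zero.
bool MyFeasibilityNLP::eval_grad_f(
   Index         n,
   const Number* x,
   bool          new_x,
   Number*       grad_f
)
{
   for( Index i = 0; i < n; i++ )
   {
      grad_f[i] = 0.0;
   }
   return true;
}
```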
Comments

I am not aware of an option for this. As a workaround, you could patch Ipopt so that the cached objective gradient is reused for every iterate:

```diff
--- a/src/Algorithm/IpOrigIpoptNLP.cpp
+++ b/src/Algorithm/IpOrigIpoptNLP.cpp
@@ -507,7 +507,7 @@ SmartPtr<const Vector> OrigIpoptNLP::grad_f(
 {
    SmartPtr<Vector> unscaled_grad_f;
    SmartPtr<const Vector> retValue;
-   if( !grad_f_cache_.GetCachedResult1Dep(retValue, &x) )
+   if( !grad_f_cache_.GetCachedResult1Dep(retValue, NULL) )
    {
       grad_f_evals_++;
       unscaled_grad_f = x_space_->MakeNew();
@@ -519,7 +519,7 @@ SmartPtr<const Vector> OrigIpoptNLP::grad_f(
       ASSERT_EXCEPTION(success && IsFiniteNumber(unscaled_grad_f->Nrm2()), Eval_Error,
                        "Error evaluating the gradient of the objective function");
       retValue = NLP_scaling()->apply_grad_obj_scaling(ConstPtr(unscaled_grad_f));
-      grad_f_cache_.AddCachedResult1Dep(retValue, &x);
+      grad_f_cache_.AddCachedResult1Dep(retValue, NULL);
    }
    return retValue;
```

The objective gradient should then be evaluated only once or twice.
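The reason this helps: GetCachedResult1Dep and AddCachedResult1Dep store a result together with the dependency it was computed for, and a lookup only hits when the dependency matches. Passing NULL instead of &x at both the store and the lookup makes the first cached value match every subsequent call. A much-simplified sketch of that pattern (illustrative only; Ipopt's actual CachedResults class keeps a list of entries and compares object tags rather than raw pointers):

```cpp
#include <cstddef>
#include <iostream>

// Single-entry stand-in for a one-dependency result cache.
template <typename T>
class OneDepCache
{
public:
   OneDepCache() : have_result_(false), dep_(NULL) { }

   // Hit only if a result was stored for the same dependency. With
   // dep == NULL at store and lookup time, the first stored value
   // matches every later lookup, so the evaluation runs only once.
   bool GetCachedResult1Dep(T& result, const void* dep) const
   {
      if( have_result_ && dep_ == dep )
      {
         result = result_;
         return true;
      }
      return false;
   }

   void AddCachedResult1Dep(const T& result, const void* dep)
   {
      result_ = result;
      dep_ = dep;
      have_result_ = true;
   }

private:
   bool        have_result_;
   const void* dep_;
   T           result_;
};

int main()
{
   OneDepCache<double> cache;
   double val;
   if( !cache.GetCachedResult1Dep(val, NULL) )  // first call: cache miss
   {
      val = 0.0;  // stand-in for an expensive gradient evaluation
      cache.AddCachedResult1Dep(val, NULL);
   }
   // Every further NULL-keyed lookup now hits the cache.
   std::cout << cache.GetCachedResult1Dep(val, NULL) << "\n";  // prints 1
   return 0;
}
```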
Wow, that did indeed speed things up significantly! I applied the same change to the evaluation of the objective function, OrigIpoptNLP::f:

```diff
 Number OrigIpoptNLP::f(
    const Vector& x
 )
 {
    DBG_START_METH("OrigIpoptNLP::f", dbg_verbosity);
    Number ret = 0.0;
    DBG_PRINT((2, "x.Tag = %u\n", x.GetTag()));
-   if( !f_cache_.GetCachedResult1Dep(ret, &x) )
+   if( !f_cache_.GetCachedResult1Dep(ret, NULL) ) // CUSTOM ADDITION: for feasibility problems
    {
       f_evals_++;
       SmartPtr<const Vector> unscaled_x = get_unscaled_x(x);
       timing_statistics_.f_eval_time().Start();
       bool success = nlp_->Eval_f(*unscaled_x, ret);
       timing_statistics_.f_eval_time().End();
       DBG_PRINT((1, "success = %d ret = %e\n", success, ret));
       ASSERT_EXCEPTION(success && IsFiniteNumber(ret), Eval_Error, "Error evaluating the objective function");
       ret = NLP_scaling()->apply_obj_scaling(ret);
-      f_cache_.AddCachedResult1Dep(ret, &x);
+      f_cache_.AddCachedResult1Dep(ret, NULL); // CUSTOM ADDITION: for feasibility problems
    }
    return ret;
 }
```

But as you suspected, the performance gain here is negligible compared to the objective gradient change. Do you think this could be integrated into the code relatively easily through an additional option?
Yes, I will add an option to signal that the objective is linear, but not one for a constant objective.
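For reference, a minimal sketch of how such an option would be set from the C++ API, assuming it becomes available under the name grad_f_constant (as in later Ipopt releases; check your version's options reference):

```cpp
#include "IpIpoptApplication.hpp"

using namespace Ipopt;

int main()
{
   SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

   // A feasibility problem with a constant dummy objective has a
   // constant gradient, so it only needs to be evaluated once.
   // (Option name assumed from later Ipopt releases.)
   app->Options()->SetStringValue("grad_f_constant", "yes");

   if( app->Initialize() != Solve_Succeeded )
   {
      return 1;
   }
   // app->OptimizeTNLP(mynlp);  // then solve the feasibility problem as usual
   return 0;
}
```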