
Tell Ipopt it is a Feasibility Problem #597

Closed

DanielDoehring opened this issue Aug 13, 2022 · 3 comments

Comments

@DanielDoehring
Contributor

Is there a way of telling Ipopt that one is "only" looking for a feasible point that satisfies the constraints?
Currently, I supply a dummy objective and a zero-only objective gradient.

The reason I am asking is that I could imagine some speedup if the objective function and the objective gradient are not called at all.
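
For illustration, a minimal sketch of this workaround (the class name FeasibilityTNLP is hypothetical, not taken from this issue) could look like this in an Ipopt::TNLP implementation:

// Minimal sketch of the workaround described above: a constant dummy
// objective and an all-zero objective gradient in a TNLP implementation.
// The class name FeasibilityTNLP is hypothetical; the remaining TNLP
// methods (get_nlp_info, get_bounds_info, eval_g, eval_jac_g, ...) would
// be implemented as usual for the actual constraints.
#include "IpTNLP.hpp"

using namespace Ipopt;

class FeasibilityTNLP : public TNLP
{
public:
   // Dummy objective: always zero, so only the constraints matter.
   bool eval_f(Index n, const Number* x, bool new_x, Number& obj_value) override
   {
      obj_value = 0.0;
      return true;
   }

   // Matching objective gradient: all zeros.
   bool eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f) override
   {
      for( Index i = 0; i < n; ++i )
         grad_f[i] = 0.0;
      return true;
   }

   // ... other TNLP methods for bounds, constraints, and Jacobian ...
};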

@svigerske
Member

svigerske commented Aug 13, 2022

I am not aware of an option for this.
Following the logic for jac_c_constant, you can try this to see whether it is worth adding something:

--- a/src/Algorithm/IpOrigIpoptNLP.cpp
+++ b/src/Algorithm/IpOrigIpoptNLP.cpp
@@ -507,7 +507,7 @@ SmartPtr<const Vector> OrigIpoptNLP::grad_f(
 {
    SmartPtr<Vector> unscaled_grad_f;
    SmartPtr<const Vector> retValue;
-   if( !grad_f_cache_.GetCachedResult1Dep(retValue, &x) )
+   if( !grad_f_cache_.GetCachedResult1Dep(retValue, NULL) )
    {
       grad_f_evals_++;
       unscaled_grad_f = x_space_->MakeNew();
@@ -519,7 +519,7 @@ SmartPtr<const Vector> OrigIpoptNLP::grad_f(
       ASSERT_EXCEPTION(success && IsFiniteNumber(unscaled_grad_f->Nrm2()), Eval_Error,
                        "Error evaluating the gradient of the objective function");
       retValue = NLP_scaling()->apply_grad_obj_scaling(ConstPtr(unscaled_grad_f));
-      grad_f_cache_.AddCachedResult1Dep(retValue, &x);
+      grad_f_cache_.AddCachedResult1Dep(retValue, NULL);
    }
 
    return retValue;

The objective gradient should then be evaluated only once or twice.
The objective function will still be evaluated more often, but that should be cheap.
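
For readers less familiar with Ipopt's internals, the effect of the patch can be illustrated with a simplified model of the one-dependency cache (a sketch of the assumed behavior, not the actual CachedResults class): once the dependency is NULL, the stored key no longer changes between iterates, so every lookup after the first one is a hit and the gradient callback is not evaluated again.

// Simplified sketch (assumed behavior, not Ipopt's CachedResults template):
// caching a result against a dependency tag vs. against NULL.
#include <cstddef>

struct TaggedVector { std::size_t tag; /* iterate data ... */ };

class OneDepCache
{
public:
   bool GetCachedResult1Dep(double& result, const TaggedVector* dep) const
   {
      // With dep == NULL the key is always 0, so any later NULL lookup
      // matches the entry stored on the first evaluation.
      std::size_t key = dep ? dep->tag : 0;
      if( !have_ || key != key_ )
         return false;   // cache miss: caller re-evaluates
      result = value_;
      return true;        // cache hit: no evaluation needed
   }

   void AddCachedResult1Dep(double result, const TaggedVector* dep)
   {
      key_   = dep ? dep->tag : 0;
      value_ = result;
      have_  = true;
   }

private:
   bool        have_  = false;
   std::size_t key_   = 0;
   double      value_ = 0.0;
};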

@DanielDoehring
Contributor Author

DanielDoehring commented Aug 14, 2022

Wow, that did indeed speed things up significantly!

I applied the same additions to the evaluation of f here, i.e.,

Number OrigIpoptNLP::f(
   const Vector& x
)
{
   DBG_START_METH("OrigIpoptNLP::f", dbg_verbosity);
   Number ret = 0.0;
   DBG_PRINT((2, "x.Tag = %u\n", x.GetTag()));
-   if( !f_cache_.GetCachedResult1Dep(ret, &x) )
+   if( !f_cache_.GetCachedResult1Dep(ret, NULL) ) // CUSTOM ADDITION: For feasibility problems
   {
      f_evals_++;
      SmartPtr<const Vector> unscaled_x = get_unscaled_x(x);
      timing_statistics_.f_eval_time().Start();
      bool success = nlp_->Eval_f(*unscaled_x, ret);
      timing_statistics_.f_eval_time().End();
      DBG_PRINT((1, "success = %d ret = %e\n", success, ret));
      ASSERT_EXCEPTION(success && IsFiniteNumber(ret), Eval_Error, "Error evaluating the objective function");
      ret = NLP_scaling()->apply_obj_scaling(ret);
-      f_cache_.AddCachedResult1Dep(ret, &x);
+      f_cache_.AddCachedResult1Dep(ret, NULL); // CUSTOM ADDITION: For feasibility problems
   }

   return ret;
}

But as you suspected, the performance increase is negligible compared to the objective gradient.
FYI: Both the objective function and its gradient are now called only a single time.

Do you think this could be integrated into the code relatively easily through an additional option?

@svigerske
Member

Yes, I will add an option to signal that the objective is linear, but not for a constant objective.
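
For reference, such an option would presumably be set the same way as the existing constant-structure options (jac_c_constant, jac_d_constant, hessian_constant) through the application's options list. The option name grad_f_constant below is an assumption and should be checked against the documentation of the Ipopt release that ships the feature:

// Usage sketch for setting such an option via IpoptApplication.
// "grad_f_constant" is an assumed option name; verify it in the docs.
#include "IpIpoptApplication.hpp"

using namespace Ipopt;

int main()
{
   SmartPtr<IpoptApplication> app = IpoptApplicationFactory();

   // Existing options for constant constraint Jacobians:
   app->Options()->SetStringValue("jac_c_constant", "yes");
   app->Options()->SetStringValue("jac_d_constant", "yes");

   // Assumed option discussed in this issue (name to be verified):
   app->Options()->SetStringValue("grad_f_constant", "yes");

   app->Initialize();
   // app->OptimizeTNLP(mynlp);  // mynlp would be the user's TNLP instance
   return 0;
}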
