
"Native" support for parametric (LP/MILP) models #3215



Conversation


@sstroemer commented Feb 13, 2023

Idea

This PR is based on the initial idea and discussion over at jump-dev/MathOptInterface.jl#2092. In contrast to that approach, however, it builds a similar mechanism mostly on top of JuMP functionality, does not rely on a bridge, and supports multiplicative parameters in the objective function (and, to some degree, duals of the parameters).

Preface: there are a lot of rough edges; this is just a first proof of concept.

This spends some additional time during model building to set up the proper data structures, but parameter updates / new iterations are almost as fast as possible: the only overhead is a single LinearAlgebra.dot call, which should be pretty fast for SparseVectors.
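
For intuition, here is a minimal, hypothetical sketch (not the actual data structure of this PR; all names below are made up) of why a parameter update can be reduced to a single dot product: if every parametric coefficient stores its constant part plus a sparse weight vector over the parameters, re-evaluating it after a set_value is one dot call.

using SparseArrays, LinearAlgebra

# Hypothetical illustration only: a parametric coefficient keeps its constant
# (non-parametric) part plus a sparse vector of weights over the model's parameters.
struct ParametricCoefficient
    base::Float64
    weights::SparseVector{Float64}
end

# Evaluating a coefficient for the current parameter values is a single dot call.
current_value(c::ParametricCoefficient, values) = c.base + dot(c.weights, values)

# The x[1] coefficient (p[1] + p[2]) + 2 from Example 1, with parameter order (p[1], p[2], q):
coef_x1 = ParametricCoefficient(2.0, sparsevec([1, 2], [1.0, 1.0], 3))
current_value(coef_x1, [1.0, 1.0, 5.0])   # => 4.0
current_value(coef_x1, [1.0, 0.0, 5.0])   # => 3.0 (as after set_value(p[2], 0.0))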

Rough outline

Example 1

Create a model and enable parameters for it:

import HiGHS
using JuMP

model = JuMP.Model(HiGHS.Optimizer)
enable_parameters!(model)

We can now add parameters like variables, and bind them to initial values:

@variable(model, x[1:3] >= 0)

@parameter(model, p[1:2], [1.0, 1.0])
@parameter(model, c[1:3], [1.0, 1.5, 3.0])
@parameter(model, q, 5.0)

We can construct a parametric constraint using

@pconstraint(model, constr, (p[1] + p[2]) * x[1] + 2*p[2]*x[2] + 2*x[1] + x[3] >= q)

You'll notice that this constraint currently reads 2 x[1] + x[3] - q ≥ 0.0. This is because multiplicative parameters are only substituted right before calling the solver, so from the constraint's point of view they are currently treated as 0.
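
To see this in code (a sketch; it assumes normalized_coefficient works on @pconstraint references the same way it does on regular JuMP constraints):

# Before the first solve the multiplicative parameter terms are not yet substituted,
# so only the purely-variable part of each coefficient is visible.
normalized_coefficient(constr, x[1])   # expected => 2.0 (the (p[1] + p[2]) part is still missing)
normalized_coefficient(constr, x[2])   # expected => 0.0 (2*p[2]*x[2] not yet substituted)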

We can also define an objective using parameters:

@objective(model, ParametricMin, sum((4.0 - i) * c[i] * x[i] for i in 1:3))

And now we just call

optimize!(model)

Let's look at the constraint now

constr

results in 4 x[1] + x[3] - q + 2 x[2] ≥ 0.0

and the objective

objective_function(model)

results in 3 x[3] + 3 x[2] + 3 x[1].
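
For reference, these numbers follow directly from the parameter values: with p = [1.0, 1.0], the x[1] coefficient is (p[1] + p[2]) + 2 = 4 and the x[2] coefficient is 2 * p[2] = 2; with c = [1.0, 1.5, 3.0], the objective coefficients are (4 - i) * c[i] = 3 for every i.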

We can also query

reduced_cost(q)     # => 0.75

but that needs to be handled with care since it only works for parameters that do not occur in a multiplicative way.
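
For reference, the 0.75 matches the usual LP sensitivity interpretation: the substituted problem is min 3 x[1] + 3 x[2] + 3 x[3] subject to 4 x[1] + 2 x[2] + x[3] ≥ q and x ≥ 0, whose optimum uses only x[1], so a unit increase in q raises the objective by 3/4 = 0.75.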

Updating a parameter is as simple as

set_value(p[2], 0.0)

which again does not affect the constraints immediately, but only as soon as we actually call optimize!(...). It is therefore pretty fast.
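
Continuing the example (expected output sketched from the parameter values; the exact printing may differ):

optimize!(model)

# The x[2] term drops out and the x[1] coefficient becomes (1.0 + 0.0) + 2 = 3,
# so the constraint is expected to now read 3 x[1] + x[3] - q ≥ 0.0.
constr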


Example 2

Using expressions works as we would expect - except that my constraint macro is ... lacking. But that can be fixed. See the spoiler for the full example:

model = JuMP.Model(HiGHS.Optimizer)
enable_parameters!(model)

@variable(model, x[1:3] >= 0)
@parameter(model, p[1:2], [1.0, 1.0]; fix=false)
@parameter(model, q, 5.0)

# TODO: model[:e1] is bound to the AffExpr
e1 = @pexpression(model, [i=1:2], x[i] + 1)

add_to_expression!(e1[1], x[2]*p[1])
add_to_expression!(e1[2], x[3]*p[2])

e2 = @pexpression(model, q + 5)

@pconstraint(model, c[i=1:2], e1[i] + e2 >= 10)

@objective(model, Min, sum(x))

optimize!(model)
print(c[1])

set_value(p[1], 5.0; fix=false)

optimize!(model)
print(c[1])

Benchmark

Similar to the MOI PR, I am comparing against doing the updates manually. See the code in the spoiler:

import HiGHS
import Gurobi
using JuMP



function build_and_pass_bm(N::Int64, opt)
    @info "Build model"
    @time begin
        model = direct_model(opt)
        set_silent(model)
        set_string_names_on_creation(model, false)

        @variable(model, x[1:N])
        @constraint(model, c[i=1:N], 1.5 * x[i] >= 1)
        # @constraint(model, b[i=1:N], x[i] >= (mod(i, 2) == 0 ? 5 : 0))
        @objective(model, Min, sum(x))
    end
    @info "First optimize"
    @time optimize!(model)
    @info "Update"
    @time begin 
        set_normalized_coefficient.(c, x, (2.0 * 1.5) .* ones(N))
    end
    @info "Re-optimize"
    @time optimize!(model)
    @info "Total"
    return model
end

function build_and_pass(N::Int64, opt)
    @info "Build model"
    @time begin
        model = direct_model(opt)
        set_silent(model)
        set_string_names_on_creation(model, false)
        enable_parameters!(model)

        @variable(model, x[1:N])
        # @parameter(model, p[1:N], ones(N); fix=false)  # <-- performance is bad
        @variable(model, p[1:N])
        for i in 1:N
            model.ext[:__parameters].parameters[p[i]] = ParameterData(UInt64(i), 1.0)
        end

        @pconstraint(model, c[i=1:N], 1.5 * p[i] * x[i] >= 1)
        # @constraint(model, b[i=1:N], x[i] >= (mod(i, 2) == 0 ? 5 : 0))
        @objective(model, Min, sum(x))
    end
    @info "First optimize"
    @time optimize!(model)
    @info "Update"
    @time begin 
        set_value.(p, 2.0 .* ones(N); fix=false)
    end
    @info "Re-optimize"
    @time optimize!(model)
    @info "Total"
    return model
end

@time build_and_pass(1, Gurobi.Optimizer());
@time build_and_pass_bm(1, Gurobi.Optimizer());

GC.gc()
@time _m = build_and_pass(50_000, Gurobi.Optimizer());
GC.gc()
@time _m = build_and_pass_bm(50_000, Gurobi.Optimizer());

Results

The best possible (manual) timings are:

[ Info: Build model
  0.074890 seconds (1.15 M allocations: 80.642 MiB)
[ Info: First optimize
  0.034082 seconds (34 allocations: 1.047 KiB)
[ Info: Update
  0.008517 seconds (100.01 k allocations: 2.671 MiB)
[ Info: Re-optimize
  0.012197 seconds (21 allocations: 656 bytes)
[ Info: Total
  0.172960 seconds (1.25 M allocations: 83.346 MiB)

The timings of the parametric model are:

[ Info: Build model
  0.224829 seconds (3.50 M allocations: 255.444 MiB, 22.86% gc time)
[ Info: First optimize
  0.075105 seconds (499.53 k allocations: 20.211 MiB)
[ Info: Update
  0.009579 seconds (300.02 k allocations: 8.011 MiB)
[ Info: Re-optimize
  0.052549 seconds (549.52 k allocations: 21.737 MiB)
[ Info: Total
  0.387364 seconds (4.85 M allocations: 305.430 MiB, 13.27% gc time)

Notes

  • This is currently slower on the build step than the MOI approach (but actually faster once we consider the first solve too), and faster on subsequent updates and re-optimizes.
  • For a model with only parametric constraints (so every single one needs to be updated), the update and re-optimize are already faster than fully rebuilding and optimizing. In reality, there will be constraints that are not parametric, and building the model will require "external" data handling (e.g. constructing the constraints based on some configuration) that makes "not re-building" a lot faster.
  • I am using Gurobi for this at the moment, since HiGHS struggles with cases with "more" variables (see also this comment by odow).
  • The build time for the "first iteration" is bad... However, this enables something that cannot reasonably be done manually: constructing a constraint out of multiple expressions and then modifying a parameter (in the manual case, we would not easily know the correct variable coefficient after an update to only a specific parameter); see the sketch after this list.
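
For contrast, a rough sketch of the manual path (assuming a plain, non-parametric JuMP model built like Example 2): after several expressions have been merged into one constraint, updating what corresponds to a single parameter requires the user to re-derive the affected coefficient themselves.

# Manual counterpart of set_value(p[1], 5.0) from Example 2 (sketch only): we have
# to know that p[1] enters c[1] solely through the x[2] term, and recompute and
# push the final coefficient ourselves.
new_p1 = 5.0
set_normalized_coefficient(c[1], x[2], new_p1)
optimize!(model)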

Open points

  • benchmark against ParameterJuMP / POI (see this comment by joaquimg)
  • my macros are a total mess; the @parameter macro is a lot slower than manually doing what I thought it does, so something there is completely wrong; the expression/constraint macros need to properly bind their results in the calling scope
  • I have "hacked" into the objective creation - there is most likely a better way to do this in the end

credits and history: discourse (I've tried building expressions, building functions, using DynamicExpressions.jl, ...)

cc @fleimgruber, who might be interested.

@sstroemer (Author)

Closing this in favor of the proposal in jump-dev/MathOptInterface.jl#2094 by @odow, which describes a much more general approach: adding support for parameters directly in MOI.

@sstroemer closed this Feb 14, 2023