Benchmark MOI #16

Closed
coroa opened this issue Jun 5, 2019 · 8 comments

coroa commented Jun 5, 2019

MOI's direct mode allows skipping the model copy in JuMP and instead hands all data straight through to Gurobi. Repeat the earlier benchmark to compare memory consumption with PyPSA.
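
For reference, a minimal sketch of the two construction modes in the JuMP 0.19-era API (the model contents below are placeholders, not the EnergyModels.jl formulation):

```julia
using JuMP, Gurobi

# Caching mode (the with_optimizer(Gurobi) variant): JuMP keeps its own
# copy of the problem and copies it into Gurobi when optimize! is called.
cached = Model(with_optimizer(Gurobi.Optimizer))

# Direct mode: every @variable/@constraint call is handed straight to the
# underlying Gurobi model, so no second copy is kept on the Julia side.
direct = direct_model(Gurobi.Optimizer())

@variable(direct, x[1:3] >= 0)
@constraint(direct, sum(x) <= 1)
@objective(direct, Max, sum(x))
optimize!(direct)
```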


coroa commented Jun 13, 2019

Hmmm ... okay, EnergyModels together with MOI is now on par with PyPSA for solving a slightly modified pypsa-eur model with 181 buses, as prepared in the benchmark-energymodels repo:

[figure: benchmark_energymodels_pypsa]

Things to note:

  1. The with_optimizer(Gurobi) branch, i.e. caching mode, shows a 15% memory reduction compared to direct mode.
  2. Direct mode has about the same memory consumption as PyPSA (a rough sketch of how such a build-phase comparison can be scripted follows below).
  3. Speed-wise the three options are mostly equivalent, mostly because the build-up time is dwarfed by the solution time.
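
A rough sketch of how the build-phase comparison can be scripted on the Julia side; `build_model` is a hypothetical stand-in for the EnergyModels.jl build step, and the plotted numbers presumably reflect total process memory rather than just Julia allocations:

```julia
using JuMP, Gurobi

# Hypothetical stand-in for the EnergyModels.jl build phase.
function build_model(m)
    @variable(m, x[1:10_000] >= 0)
    @constraint(m, sum(x) <= 1)
    return m
end

# Julia-side time and allocations for the two modes (the first call of each
# also includes compilation; repeat the measurement for clean numbers).
for make in (() -> Model(with_optimizer(Gurobi.Optimizer)),
             () -> direct_model(Gurobi.Optimizer()))
    @time build_model(make())
end

# Total process memory (Julia + Gurobi) is better read from the OS,
# e.g. as peak RSS via `/usr/bin/time -v julia benchmark.jl`.
```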

More detailed per-phase timings are at jonas/workflows/benchmark_energymodels/benchmarks. I'll review them later.

Note that JuMP/MOI is still young and time/memory improvements are an active area of development; for instance, jump-dev/Gurobi.jl#216 (comment) throws out an intermediate layer between JuMP, MOI and Gurobi and shows memory reductions of 50%-85%.


fneum commented Jun 13, 2019

Those are two runs each for caching mode and direct mode in EnergyModels.jl vs. one run of PyPSA, right?


coroa commented Jun 13, 2019

Correct, I wanted to see the influence of the compiler as well, which adds a roughly constant 180-200 seconds to the first run.
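
For illustration, the compilation overhead is the usual first-call effect in Julia; a trivial sketch, where `run_benchmark` is a hypothetical placeholder for one full build-and-solve run:

```julia
# Hypothetical placeholder for one full build-and-solve run.
run_benchmark() = sum(rand(10^6))

@time run_benchmark()  # first call: includes JIT compilation of all involved code
@time run_benchmark()  # second call: compiled code is reused, only the actual work is timed
```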


fneum commented Jun 13, 2019

My intuition would have been that direct mode is less memory intensive than caching mode. Is there a specific reason to expect direct mode to consume more memory?


coroa commented Jun 13, 2019

That was my expectation as well, and it was also the case a few months ago, if the case presented in jump-dev/JuMP.jl#1905 is comparable (see his slide on YouTube for a better representation).

But then jump-dev/MathOptInterface.jl#696 improved how MOI models are transferred to the solvers by using the bulk functions add_variables and add_constraints instead of the single-variable versions, which the direct models cannot easily reproduce (jump-dev/JuMP.jl#1939). I suppose the bulk functions also lead to a more compact form of metadata storage?
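
For reference, a minimal sketch of the scalar vs. bulk MOI calls in question (2019-era MOI API, talking to Gurobi directly rather than through JuMP):

```julia
using MathOptInterface, Gurobi
const MOI = MathOptInterface

model = Gurobi.Optimizer()

# Scalar version: one call into the solver wrapper per variable.
x_scalar = [MOI.add_variable(model) for _ in 1:1_000]

# Bulk version: a single call hands over all variables at once, which the
# copy_to path improved in jump-dev/MathOptInterface.jl#696 relies on.
x_bulk = MOI.add_variables(model, 1_000)

# Bulk constraints work analogously, with vectors of functions and sets.
funcs = [MOI.SingleVariable(xi) for xi in x_bulk]
sets  = [MOI.GreaterThan(0.0) for _ in 1:1_000]
cis   = MOI.add_constraints(model, funcs, sets)
```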

Well, all of that might change again with the Gurobi rewrite (jump-dev/Gurobi.jl#216 (comment)). Unfortunately, it targets MOI 0.9, which JuMP and quite a few other solvers are not compatible with yet, as one can track in jump-dev/MathOptInterface.jl#736.


coroa commented Jul 9, 2019

Sooo, I ported JuMP to MOI 0.9 (jump-dev/JuMP.jl#2003) to get the new Gurobi wrapper and then got rid of saving variable and constraint names (https://github.com/coroa/Gurobi.jl/tree/jh/moi09_wo_names), and voilà, already quite an improvement over PyPSA (now with the 128-bus pypsa-eur):

[figure: benchmark_moi9]
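
A minimal sketch of what dropping names means at the JuMP level; the branch above goes further and stops the Gurobi wrapper from storing or forwarding name attributes at all:

```julia
using JuMP, Gurobi

m = direct_model(Gurobi.Optimizer())

# Named container: JuMP attaches a name string ("x[1]", "x[2]", ...) to every
# element via the MOI.VariableName attribute, which has to be stored somewhere.
@variable(m, x[1:1_000] >= 0)

# Anonymous container: no name strings are generated or stored at all.
y = @variable(m, [1:1_000], lower_bound = 0)
```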

I've got another idea, which I'll try tomorrow, to see how far one can push this.

FabianHofmann commented:

great!


coroa commented Jul 10, 2019

If I rip out all data caches from Gurobi.jl (drop stored variable/constraint names, branch moi09_w_dummynames, and disallow deletion of variables/constraints, branch moi09_no_deletions), I can remove all data from Julia except for arrays of column and row indices that keep track of variables and constraints (these could in principle be replaced by some sort of range-like array reference, but the potential savings look quite marginal); see the sketch after the figure below.

[figure: benchmark_energymodels_pypsa_moi9]
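
Roughly, the only state left on the Julia side is then an index translation between MOI indices and Gurobi columns/rows, along the lines of this hypothetical, simplified sketch:

```julia
# Hypothetical, simplified illustration of the bookkeeping that remains once
# the caches are gone: plain index-translation tables.
struct SlimIndexMap
    columns::Vector{Cint}  # columns[i]: 0-based Gurobi column of MOI.VariableIndex(i)
    rows::Vector{Cint}     # rows[j]: 0-based Gurobi row of the j-th affine constraint
end

column(idx::SlimIndexMap, i::Integer) = idx.columns[i]
row(idx::SlimIndexMap, j::Integer) = idx.rows[j]

# As long as deletions are disallowed, columns == 0:(n - 1), so the vector
# could in principle be replaced by a range-like reference, but that only
# saves about one Cint (4 bytes) per variable.
```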

The remaining question is whether Gurobi is able to manage its own memory more efficiently if the variables and constraints are passed in in bulk, which I'll try now. It's not!

coroa closed this as completed Jul 10, 2019
coroa added a commit that referenced this issue Jul 16, 2019
"Unnamed variables/constraints consume significantly less memory" (#16)