Releases: qpsolvers/qpbenchmark
v1.1.0
This release adds two solvers to the benchmark: HPIPM and PIQP. It also invites everyone to submit results obtained on their machines to the Results discussions.
Added
- Check consistency after loading results
- More CPU information in reports
- New solver: HPIPM
- New solver: PIQP
Changed
- Don't hard-wrap report lines, as it doesn't render well in Discussions
- Improve reporting of shifted geometric mean errors
- Make `cpuinfo` a proper dependency
- Refactor results class to allow finer `check_results` sessions
- Update to qpsolvers v4.0.0
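For context, the shifted geometric mean mentioned above is a standard aggregate in solver benchmarking. The following is a minimal sketch of the generic formula, not qpbenchmark's exact implementation:

```python
import math


def shifted_geometric_mean(values, shift=1.0):
    """Shifted geometric mean of nonnegative values.

    A common aggregate in solver benchmarks: less sensitive than the
    arithmetic mean to outliers, while the shift keeps near-zero values
    from dominating. Generic sketch, not qpbenchmark's exact code.
    """
    n = len(values)
    return math.exp(sum(math.log(v + shift) for v in values) / n) - shift
```

With the default shift of 1.0, a list of identical values is mapped back to that value, e.g. `shifted_geometric_mean([1.0, 1.0, 1.0])` is 1.0.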
Fixed
- Correct `None` values to `False` in `found` column
- Make sure `found` column has only boolean values
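The `found`-column fix amounts to a normalization like the sketch below. The function name and list-based representation are illustrative; the actual code operates on the benchmark's results table:

```python
def normalize_found(found_column):
    """Coerce a solver-status column to strict booleans.

    Illustrative sketch: replace None (the solver did not report a
    status) with False, and cast every other entry to bool.
    """
    return [False if value is None else bool(value) for value in found_column]
```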
v1.0.0
This is the first release of the QP solvers benchmark, which ships a `qpsolvers_benchmark` command-line tool contributed by @ZAKIAkram.
The goal of this benchmark is to help users compare and select QP solvers. Its methodology is open to discussion. The benchmark ships standard and community test sets, as well as a `qpsolvers_benchmark` command-line tool to run any test set directly on your machine. For instance:

```
qpsolvers_benchmark maros_meszaros/maros_meszaros.py run
```
Running a test set produces a standardized report evaluating all metrics of the benchmark across all available QP solvers. This repository also distributes results from running the benchmark on all test sets using the same computer.
New test sets are welcome. The benchmark is designed so that each test-set directory is standalone: the `qpsolvers_benchmark` command can be run on test sets from other repositories. Feel free to create ones that better represent the kind of problems you are working on.
Thanks to @aescande, @ottapav and @ZAKIAkram for contributing to this release 👍
Added
- Allow non-lowercase solver names in the command line (thanks to @ottapav)
- Command-line tool and standalone test sets (thanks to @ZAKIAkram)
Changed
- Plot: trim solutions that don't fulfill tolerance requirements
- Rename `hist` command to `plot`
- Update to qpsolvers v3.4.0
v0.1.0-beta
This is the last pre-release of qpsolvers_benchmark, a benchmark for quadratic programming (QP) solvers available in Python. The main upgrade since alpha is that the benchmark now checks both residuals and the duality gap, i.e., full optimality conditions.
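Writing the QP as minimizing ½ xᵀPx + qᵀx subject to Gx ≤ h, the duality gap checked here is the difference between primal and dual objective values. Below is a pure-Python sketch for the inequality-constrained case, using nested-list matrices for self-containment; qpbenchmark relies on qpsolvers for the actual optimality checks:

```python
def duality_gap(P, q, h, x, z):
    """Duality gap of min. 0.5 x^T P x + q^T x s.t. G x <= h,
    at a primal point x and dual multipliers z >= 0.

    Assumes (x, z) satisfies stationarity P x + q + G^T z = 0, under
    which the dual objective is -0.5 x^T P x - h^T z (so G itself is
    not needed here). Sketch only, not qpbenchmark's actual code.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def matvec(M, v):
        return [dot(row, v) for row in M]

    xPx = dot(x, matvec(P, x))
    primal = 0.5 * xPx + dot(q, x)
    dual = -0.5 * xPx - dot(h, z)
    return primal - dual  # zero at an optimal primal-dual pair
```

For example, for the scalar QP minimizing 0.5 x² − 2x subject to x ≤ 1, the optimum is x = 1 with multiplier z = 1, and the gap evaluates to zero there.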
The goal of this pre-release is to request feedback from QP solver developers before the initial release. The initial release will not be the end of the road: it will be advertised more widely, but the benchmark is designed to be iteratively refined by the community in the long run.
You can already help at this stage! Here are some suggestions:
- Take a look at the README and current reports produced for the Maros-Meszaros and GitHub free-for-all test sets.
- Look at the benchmark methodology and join the discussions.
- Submit a new problem, e.g. one that reflects the applications you are working on.
Added
- Check dual residual
- Check duality gap
- Document all benchmark functions
- Main script: new `hist` plot command
- `ProblemNotFound` exception
- Results by settings in reports
- Write benchmark version in reports
Changed
- Benchmark script takes test set as first argument
- Maros-Meszaros: empty equality constraints are now set to `None`
- ProxQP: re-run benchmark with ProxQP 0.3.2
- Refactor Report class and run function
- Report encoding is now UTF-8
- Switch to qpsolvers v2.7
- Test set descriptions are now mandatory
Fixed
- Conform to linter standards
- Sparse matrix conversion
v0.1.0-alpha
This is a pre-release of qpsolvers_benchmark, a benchmark for quadratic programming (QP) solvers available in Python. This benchmark is still a work in progress. The goal of this pre-release is to show the current status and gather feedback. In particular, feel free to:
- Take a look at the README and current reports produced for the Maros-Meszaros and GitHub free-for-all test sets.
- Look at the benchmark methodology and join the discussions.
- Submit a new problem, e.g. one that reflects the applications you are working on.
Towards a tool to help us compare and select QP solvers!