Optimization with constraint learning (OCL) leverages machine learning (ML) to build optimization models in which constraints and objectives with no known explicit expression are learned directly from data. While OCL offers great advantages in designing more accurate models faster, practitioners should also be aware of the pitfalls and inaccuracies that can arise from embedding fitted models as optimization constraints.
Divided into four parts, the OCL Lab offers theory as well as hands-on tutorials, illustrated with a case study from the World Food Programme. Through the OCL Lab, participants will become familiar with two novel Python packages: (1) OptiCL, to learn and embed constraints, and (2) DOFramework, to evaluate the optimal solutions generated by an OCL algorithm. The first two parts of the lab provide participants with the theoretical and practical knowledge needed to use ML models to learn constraints and objectives directly from data. The remaining two parts are dedicated to novel solution-quality metrics for OCL and a structured testing framework for OCL algorithms.
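To give a flavor of the constraint-learning workflow covered in Part II, here is a minimal sketch that fits a scikit-learn regressor to samples of an unknown constraint function and embeds the fitted predictor as a constraint in a Pyomo model. The data, variable names, and feasibility threshold are illustrative assumptions for this sketch only; it does not show OptiCL's actual API.

```python
# Illustrative sketch (not OptiCL's API): learn a constraint from data and
# embed the fitted model in an optimization model. Data, names, and the
# threshold are assumptions made for this example.
import numpy as np
from sklearn.linear_model import LinearRegression
import pyomo.environ as pyo

# 1) Learn an unknown constraint function g(x) <= 0 from samples.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))              # sampled decisions
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] - 2.0           # observed constraint values
reg = LinearRegression().fit(X, y)
w, b = reg.coef_, float(reg.intercept_)

# 2) Embed the fitted predictor as a constraint in a Pyomo model.
m = pyo.ConcreteModel()
m.x = pyo.Var(range(2), bounds=(0, 1))
m.obj = pyo.Objective(expr=m.x[0] + m.x[1], sense=pyo.maximize)
# learned constraint: w @ x + b <= 0
m.learned = pyo.Constraint(expr=sum(float(w[i]) * m.x[i] for i in range(2)) + b <= 0)

# pyo.SolverFactory("glpk").solve(m)   # solve with any available LP solver
```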
See our OCL Lab Installation Instructions here.
Part I: OCL Introduction (60 min) [Slides]
- Intro to mathematical optimization.
- Intro to optimization with constraint learning.
Part II: OptiCL (60 min)
- World Food Programme case study. [Notebook I]
- Chemotherapy case study. [Notebook II]
Part III: Probability of Improvement (PoI) and Probability of Constraint Satisfaction (PoCS) (40 min) [Slides]
- Introduction and motivation for solution quality metrics PoI / PoCS.
- Gaussian Processes (GPs) for PoI / PoCS (a minimal sketch follows this list).
- Example. [Notebook]
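As a rough preview of the PoI computation discussed in Part III, the sketch below fits a Gaussian Process surrogate to noisy objective observations and scores a candidate solution by its probability of improving on the best value observed so far. The kernel, data, and candidate point are assumptions made for this example, not the lab's exact setup.

```python
# Illustrative sketch: estimate Probability of Improvement (PoI) at a candidate
# solution using a Gaussian Process surrogate. Kernel, data, and the candidate
# point are assumptions for this example.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 1))                      # evaluated solutions
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(30)   # noisy objective values

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

x_cand = np.array([[0.8]])                 # candidate solution to score
mu, sigma = gp.predict(x_cand, return_std=True)
f_best = y.max()                           # best objective value seen so far

# PoI (maximization): P[f(x_cand) > f_best] under the GP posterior
poi = norm.sf((f_best - mu[0]) / sigma[0])
print(f"PoI at candidate: {poi:.3f}")
```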
Part IV: DOFramework (50 min) [Slides]
- The random generation of optimization problem instances.
- DOFramework's event-driven, cloud-distributed design.
- A DOFramework experiment. [Notebook I]
- Profiling a Decision-Optimization (DO) model learner. [Notebook II]
- Mathematical Optimization
-- H. P. Williams, Model Building in Mathematical Programming, 2013.
- Constraint Learning
-- A. Fajemisin, D. Maragno, and D. den Hertog, Optimization with Constraint Learning: A Framework and Survey, 2021.
-- D. Maragno, H. Wiberg, D. Bertsimas, S. I. Birbil, D. den Hertog, and A. Fajemisin, Mixed-Integer Optimization with Constraint Learning, 2021.
- Gaussian Processes
-- C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, 2006.
- DOFramework
-- O. Davidovich, G.-T. Bercea, and S. Wasserkrug, The Good, the Bad, and the Outliers: A Testing Framework for Decision Optimization Model Learning, 2022.
Ilker Birbil is a professor at the University of Amsterdam, where he is the head of the Business Analytics Department. His research interests center around optimization methods in data science and decision making. Lately, he has been working on interpretable machine learning and data privacy in operations research.
Donato Maragno is a PhD candidate at the Department of Business Analytics, University of Amsterdam, the Netherlands. His research focuses on techniques for embedding machine learning into optimization models. He is one of the developers of OptiCL, an open-source tool for optimization with constraint learning.
Orit Davidovich is an Applied Math Research Scientist at the IBM Research Haifa Lab, Israel. Orit is interested in problems that arise when decision support is subject to uncertainty, often stemming from the available data. In addition, Orit enjoys tinkering with novel cloud computing frameworks.