Cooper is a toolkit for Lagrangian-based constrained optimization in Pytorch. This library aims to encourage and facilitate the study of constrained optimization problems in machine learning.
Cooper is (almost!) seamlessly integrated with Pytorch and preserves the usual `loss -> backward -> step` workflow. If you are already familiar with Pytorch, using Cooper will be a breeze! 🙂
Cooper was born out of the need to handle constrained optimization problems for which the loss or constraints are not necessarily "nicely behaved" or "theoretically tractable", e.g. when no (efficient) projection or proximal operator is available. Although assumptions of this kind have enabled the development of great Pytorch-based libraries such as CHOP and GeoTorch, they are seldom satisfied in the context of many modern machine learning problems.
Many of the structural design ideas behind Cooper are heavily inspired by the TensorFlow Constrained Optimization (TFCO) library. We highly recommend TFCO for TensorFlow-based projects and will continue to integrate more of TFCO's features in future releases.
Here we consider a simple convex optimization problem to illustrate how to use Cooper. This example is inspired by this StackExchange question:
> I am trying to solve the following problem using Pytorch: given a 6-sided die whose average roll is known to be 4.5, what is the maximum entropy distribution for the faces?
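In constrained-minimization form (mirroring the loss and constraint "defects" used in the code below), the problem reads:

```math
\begin{aligned}
\min_{p \in \mathbb{R}^6} \quad & \sum_{i=1}^{6} p_i \log p_i \\
\text{s.t.} \quad & \sum_{i=1}^{6} p_i = 1, \qquad \sum_{i=1}^{6} i \, p_i = 4.5, \qquad p_i \geq 0 \;\; \text{for } i = 1, \dots, 6,
\end{aligned}
```

where minimizing $\sum_i p_i \log p_i$ is equivalent to maximizing the entropy $-\sum_i p_i \log p_i$.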
```python
import torch

import cooper


class MaximumEntropy(cooper.ConstrainedMinimizationProblem):
    def __init__(self, mean_constraint):
        self.mean_constraint = mean_constraint
        super().__init__(is_constrained=True)

    def closure(self, probs):
        # Verify domain of definition of the functions
        assert torch.all(probs >= 0)

        # Negative sign removed since we want to *maximize* the entropy
        entropy = torch.sum(probs * torch.log(probs))

        # Entries of p >= 0 (equiv. -p <= 0)
        ineq_defect = -probs

        # Equality constraints for proper normalization and mean constraint
        mean = torch.sum(torch.tensor(range(1, len(probs) + 1)) * probs)
        eq_defect = torch.stack([torch.sum(probs) - 1, mean - self.mean_constraint])

        return cooper.CMPState(loss=entropy, eq_defect=eq_defect, ineq_defect=ineq_defect)
```
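The `closure` packages the loss and the constraint violations ("defects") into a `CMPState`. As a quick, purely illustrative check (the variable names below are ours, not part of the library), it can be evaluated once on the uniform distribution:

```python
# Illustrative only: evaluate the closure on a uniform die to inspect the CMPState.
toy_cmp = MaximumEntropy(mean_constraint=4.5)
toy_state = toy_cmp.closure(torch.ones(6) / 6)

print(toy_state.loss)         # sum(p * log p) = log(1/6) ≈ -1.79
print(toy_state.eq_defect)    # [sum(p) - 1, mean - 4.5] ≈ [0.0, -1.0]
print(toy_state.ineq_defect)  # -p, i.e. six entries of ≈ -0.167
```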
```python
# Define the problem and formulation
cmp = MaximumEntropy(mean_constraint=4.5)
formulation = cooper.LagrangianFormulation(cmp)

# Define the primal parameters and optimizer
probs = torch.nn.Parameter(torch.rand(6))  # Use a 6-sided die
primal_optimizer = cooper.optim.ExtraSGD([probs], lr=3e-2, momentum=0.7)

# Define the dual optimizer. Note that this optimizer has NOT been fully instantiated
# yet. Cooper takes care of this, once it has initialized the formulation state.
dual_optimizer = cooper.optim.partial_optimizer(cooper.optim.ExtraSGD, lr=9e-3, momentum=0.7)

# Wrap the formulation and both optimizers inside a ConstrainedOptimizer
coop = cooper.ConstrainedOptimizer(formulation, primal_optimizer, dual_optimizer)

# Here is the actual training loop.
# The steps follow closely the `loss -> backward -> step` Pytorch workflow.
for iter_num in range(5000):
    coop.zero_grad()
    lagrangian = formulation.composite_objective(cmp.closure, probs)
    formulation.custom_backward(lagrangian)
    coop.step(cmp.closure, probs)
```
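After the loop, the learned `probs` should approximately satisfy the constraints. A small, purely illustrative sanity check (not part of Cooper) could look like:

```python
# Illustrative sanity check on the trained distribution (not part of Cooper).
with torch.no_grad():
    faces = torch.arange(1, 7, dtype=probs.dtype)
    print(probs)                            # learned face probabilities
    print(torch.sum(probs).item())          # should be close to 1.0
    print(torch.sum(faces * probs).item())  # should be close to 4.5
```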
To install Cooper directly from this repository, run:

```bash
pip install git+https://github.com/cooper-org/cooper.git
```
First, clone the repository, navigate to the Cooper root directory and install the package in development mode by running the command that matches your setup:
Setting | Command | Notes |
---|---|---|
Development | `pip install --editable ".[dev, tests]"` | Editable mode. Matches test environment. |
Docs | `pip install --editable ".[docs]"` | Used to re-generate the documentation. |
Tutorials | `pip install --editable ".[examples]"` | Installs dependencies for running examples. |
No Tests | `pip install --editable .` | Editable mode, without tests. |
- `cooper` - base package
    - `problem` - abstract class for representing ConstrainedMinimizationProblems (CMPs)
    - `constrained_optimizer` - `torch.optim.Optimizer`-like class for handling CMPs
    - `lagrangian_formulation` - Lagrangian formulation of a CMP
    - `multipliers` - utility class for Lagrange multipliers
    - `optim` - aliases for Pytorch optimizers and extra-gradient versions of SGD and Adam
- `tests` - unit tests for `cooper` components
- `tutorials` - source code for examples contained in the tutorial gallery
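For reference, the names used in the example map onto this layout roughly as follows (a sketch based on the tree above; the top-level `cooper` namespace re-exports the classes used in the example):

```python
# Sketch: imports corresponding to the package layout above.
from cooper import (
    CMPState,
    ConstrainedMinimizationProblem,
    ConstrainedOptimizer,
    LagrangianFormulation,
)
from cooper.optim import ExtraSGD, partial_optimizer  # extra-gradient SGD + helper
```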
Please read our CONTRIBUTING guide prior to submitting a pull request. We use `black` for formatting, `isort` for import sorting, `flake8` for linting, and `mypy` for type checking.

We test all pull requests. We rely on this for reviews, so please make sure any new code is tested. Tests for `cooper` go in the `tests` folder in the root of the repository.
Cooper is distributed under an MIT license, as found in the LICENSE file.
Cooper supports the use of extra-gradient style optimizers for solving the min-max Lagrangian problem. We include the implementations of the extra-gradient version of SGD and Adam by Hugo Berard.
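Concretely, these optimizers are used to tackle the min-max problem associated with the Lagrangian of the CMP, which in the notation of the example reads (a sketch; $\lambda \geq 0$ and $\mu$ denote the inequality and equality multipliers maintained by the formulation):

```math
\min_{x} \; \max_{\lambda \geq 0, \, \mu} \;\; \text{loss}(x) + \lambda^\top \, \text{ineq\_defect}(x) + \mu^\top \, \text{eq\_defect}(x)
```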
We thank Manuel del Verme for insightful discussions during the early stages of this library.
This README closely follows the style of the NeuralCompression repository.
If you find Cooper useful in your research, please consider citing it using the snippet below:
```bibtex
@misc{gallegoPosada2022cooper,
    author={Gallego-Posada, Jose and Ramirez, Juan},
    title={{Cooper: a toolkit for Lagrangian-based constrained optimization}},
    howpublished={\url{https://github.com/cooper-org/cooper}},
    year={2022}
}
```