Python-interface Particle Sequential Convex Programming Model Predictive Control (SCP PMPC) solver.
This is a non-linear-dynamics, finite-horizon MPC solver with consensus-optimization capability and support for arbitrary constraints and arbitrary costs.
pmpc

Table of Contents
- Quick Start
- Installation
- Basic Usage
  - `solve` Method Arguments Glossary
- Advanced Usage
- Particle (consensus/contingency) optimization
- Warm-start support
- Citing This Work

Take a look at `examples/simple_demo.ipynb`.
We offer precompiled binaries for Linux and MacOS (x86_64 and aarch64) via PyPI.
$ pip install pmpc
Even precompiled binaries have some startup time, so when benchmarking, time only the second run of the solver.
Alternatively, installation can be done by cloning this repository and issuing `pip install .`
$ git clone https://github.com/StanfordASL/pmpc.git
$ cd pmpc
$ pip install .
We require that
- `julia` is in your system `PATH`
- the Julia version is 1.6+

We highly recommend that you
- obtain a dynamically linked Python, which will make the `julia` Python-interface startup much faster
- obtain Julia 1.9, which introduces a vastly faster time-to-first-run via better compilation utilities
Further subsections explain some optional installation steps.
The Python module uses pyjulia to call Julia from Python, which has a limitation: it works much better (faster startup time) with a Python version that is not statically linked to libpython. A great workaround, obtaining a Python version that is not statically linked, can easily be achieved via pyenv.
Instructions on how to build a Python version dynamically linked to libpython can be found in the pyenv documentation.
With a working version of pyenv, an example might be
$ env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 3.9.13
$ pyenv virtualenv 3.9.13 {your venv name}
$ pyenv activate {your venv name}
A large downside of using Julia for the core solver is that compilation needs to occur every time a new process is launched. This is exacerbated by pyjulia, which does not work well with statically linked Python interpreters (the most commonly used kind).
To overcome compilation times, we can start a solver process once, and the library can use that solver process repeatedly, even through script restarts.
Use
$ python3 -m pmpc.remote
to start a persistent solver process.
Next, from your script
>>> from pmpc.remote import solve # instead of `from pmpc import solve`
You can download Julia, the programming language and interpreter, from the official website, julialang.org.
Make sure Julia is in your `PATH`.
The solver is capable of consensus MPC optimization over several system instantiations. For basic usage, we'll focus on single-system MPC.
A basic MPC problem is defined using the dynamics and a quadratic cost:
- `x0` - the initial state, where `np.shape(x0) == (xdim,)`
- `f, fx, fu = f_fx_fu_fn(xt, ut)` - an affine dynamics linearization, such that
  $$x^{(i+1)} \approx f^{(i)} + f_x^{(i)} (x^{(i)} - \tilde{x}^{(i)}) + f_u^{(i)} (u^{(i)} - \tilde{u}^{(i)})$$
  where
  - `np.shape(xt) == (N, xdim)`
  - `np.shape(ut) == (N, udim)`
  - `np.shape(f) == (N, xdim)`
  - `np.shape(fx) == (N, xdim, xdim)`
  - `np.shape(fu) == (N, xdim, udim)`
- `X_ref, Q` - a reference and quadratic weight matrix for the state cost
- `U_ref, R` - a reference and quadratic weight matrix for the control cost
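As a concrete illustration of the linearization callback, here is a sketch for simple double-integrator dynamics (the `dt`, `A`, `B` values below are our own illustrative choices, not part of the pmpc API); for linear dynamics the linearization is exact:

```python
import numpy as np

# Hypothetical double integrator: x = [position, velocity], u = [acceleration].
N, xdim, udim, dt = 10, 2, 1, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])

def f_fx_fu_fn(xt, ut):
    """Affine linearization of x' = A x + B u around (xt, ut).

    For linear dynamics, f is simply the next state evaluated at the
    linearization point, and fx = A, fu = B at every stage.
    """
    f = xt @ A.T + ut @ B.T               # (N, xdim) next states
    fx = np.tile(A, (xt.shape[0], 1, 1))  # (N, xdim, xdim)
    fu = np.tile(B, (xt.shape[0], 1, 1))  # (N, xdim, udim)
    return f, fx, fu

xt, ut = np.zeros((N, xdim)), np.zeros((N, udim))
f, fx, fu = f_fx_fu_fn(xt, ut)
```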
The cost is given as
$$J = \frac{1}{2} \sum_{i=1}^{N} (x^{(i)} - x_\text{ref}^{(i)})^T Q (x^{(i)} - x_\text{ref}^{(i)}) + \frac{1}{2} \sum_{i=0}^{N-1} (u^{(i)} - u_\text{ref}^{(i)})^T R (u^{(i)} - u_\text{ref}^{(i)})$$
Note: The initial state, `x0`, is assumed constant and thus does not feature in the cost.

Note: When handling controls, we'll always have `np.shape(U) == (N, udim)`.

Note: When handling states, we'll have either `np.shape(X) == (N + 1, xdim)` with `x0` included at the beginning, or `np.shape(X) == (N, xdim)` with `x0` NOT included. `X[-1]` always refers to the state at time step N, whereas `U[-1]` always refers to the control at time step N - 1.
Thus, an example call would be
>>> import pmpc
>>> X, U, debug = pmpc.solve(f_fx_fu_fn, Q, R, x0, X_ref, U_ref)
>>> help(pmpc.solve)
Take a look at
- `tests/simple.py` for simple usage
- `tests/dubins_car.py` for defining dynamics
The solver has two scalar hyperparameters, the dynamics-linearization deviation penalties for states and controls:
- `reg_x` - regularization of state deviation between SCP iterations
- `reg_u` - regularization of control deviation between SCP iterations

Higher values will slow evolution between SCP iterations and will require more SCP iterations to converge to a solution, but will avoid instability in the solution if the dynamics are not sufficiently smooth.
- `verbose` - whether to print iteration status (user-facing)
- `debug` - whether to print very low-level debugging information (developer-facing)
- `max_it` - the maximum number of SCP iterations to perform (can be fewer if the tolerance is met earlier)
- `time_limit` - the time limit in seconds for the SCP iterations
- `res_tol` - the residual tolerance below which the solution is accepted (a measure of convergence)
- `slew_rate` - the quadratic penalty between time-consecutive controls (encourages smooth controls)
- `u_slew` - the previous action taken, with which to align the first planned action (useful for smooth receding-horizon control)
- `X_prev` - previous state solution (guess), $x^{(i)} ~~ \forall i \in [1, \dots, N]$, `shape = (N, xdim)`
- `U_prev` - previous control solution (guess), $u^{(i)} ~~ \forall i \in [0, \dots, N - 1]$, `shape = (N, udim)`
- `x_l` - state lower box constraints, $x^{(i)} ~~ \forall i \in [1, \dots, N]$, `shape = (N, xdim)`
- `x_u` - state upper box constraints, $x^{(i)} ~~ \forall i \in [1, \dots, N]$, `shape = (N, xdim)`
- `u_l` - control lower box constraints, $u^{(i)} ~~ \forall i \in [0, \dots, N - 1]$, `shape = (N, udim)`
- `u_u` - control upper box constraints, $u^{(i)} ~~ \forall i \in [0, \dots, N - 1]$, `shape = (N, udim)`
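For example, box constraints keeping all controls in [-1, 1] and the first state component non-negative could be built as follows (the specific bounds are illustrative):

```python
import numpy as np

N, xdim, udim = 10, 3, 2
u_l = -np.ones((N, udim))          # control lower bounds: u >= -1
u_u = np.ones((N, udim))           # control upper bounds: u <= 1
x_l = np.full((N, xdim), -np.inf)  # states unbounded below by default...
x_l[:, 0] = 0.0                    # ...except the first state component
x_u = np.full((N, xdim), np.inf)   # states unbounded above
```

These arrays would then be passed as `solve(..., x_l=x_l, x_u=x_u, u_l=u_l, u_u=u_u)`.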
The solver supports arbitrary custom costs via per-SCP-iteration cost linearization, and custom constraints via per-SCP-iteration reformulation into any convex-cone constraint.
- `lin_cost_fn` is an optional callable which allows specifying a custom cost; it should take arbitrary `X`, `U` and return a tuple of
  - `cx`, the linearization of the cost with respect to the state, `np.shape(cx) == (N, xdim) or cx is None`
  - `cu`, the linearization of the cost with respect to the controls, `np.shape(cu) == (N, udim) or cu is None`

We highly recommend using an auto-diff library to produce the linearizations to avoid unnecessary bugs.
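If you do write a linearization by hand, a quick finite-difference check can catch bugs; this is an illustrative sketch, not part of the pmpc API:

```python
import numpy as np

# Verify a hand-written cost linearization against a central
# finite-difference estimate of the gradient.
N, xdim, udim = 10, 3, 2
X_ref = np.zeros((N, xdim))

def cost(X, U):
    return np.sum((X - X_ref) ** 2)

def lin_cost_fn(X, U):
    return 2 * (X - X_ref), None  # (cx, cu); this cost has no control term

rng = np.random.default_rng(0)
X, U = rng.normal(size=(N, xdim)), rng.normal(size=(N, udim))
cx, cu = lin_cost_fn(X, U)

eps = 1e-6
fd = np.zeros_like(X)
for i in range(N):
    for j in range(xdim):
        dX = np.zeros_like(X)
        dX[i, j] = eps
        fd[i, j] = (cost(X + dX, U) - cost(X - dX, U)) / (2 * eps)

assert np.allclose(cx, fd, atol=1e-4)  # the analytic gradient matches
```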
- `extra_cstrs_fns` is an optional callable which returns a list of conic constraints given arbitrary `X, U`
  - a conic constraint $G z - h \in \mathcal{K}$ (e.g., $A z - b \leq 0$) consists of a tuple of 8 elements:
    - `l` - the number of non-positive orthant constraints (an integer)
    - `q` - a list with the sizes of second-order cone constraints (a list of integers)
    - `e` - the number of exponential cone constraints (an integer)
    - `G_left` - the G matrix referring to existing variables
    - `G_right` - the G matrix for new variables to introduce
    - `h` - the right-hand-side vector for the constraint
    - `c_left` - the additive linear cost augmentation for existing variables
    - `c_right` - the linear (minimization) cost for new variables to introduce

Note: `lin_cost_fn` is expected to return a tuple, but `extra_cstrs_fns` must be a list of functions!
The variable layout is: control variables (`# = N udim`) followed by state variables (`# = N xdim`).
For consensus optimization the layout is
- consensus controls (`# = Nc udim`)
- free controls (`# = M (N - Nc) udim`)
- states (`# = M N xdim`)
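Putting the 8-tuple format and the variable layout together, a constraint callback encoding -1 <= u <= 1 as non-positive-orthant rows might look like the sketch below. The exact array conventions expected by the lower-level Julia solver may differ, so treat this purely as an illustration of the tuple's structure:

```python
import numpy as np

N, xdim, udim = 10, 3, 2
nz = N * udim + N * xdim  # existing variables: controls first, then states

def control_box_cstr(X, U):
    # Illustrative sketch (not an official pmpc helper): encode -1 <= u <= 1
    # as non-positive-orthant rows, G z - h <= 0, using the documented 8-tuple.
    n_u = N * udim
    G_u = np.vstack([np.eye(n_u), -np.eye(n_u)])              # u <= 1 and -u <= 1
    G_left = np.hstack([G_u, np.zeros((2 * n_u, N * xdim))])  # states untouched here
    h = np.ones(2 * n_u)
    l, q, e = 2 * n_u, [], 0          # all rows live in the non-positive orthant
    G_right = np.zeros((2 * n_u, 0))  # no extra variables introduced
    c_left, c_right = np.zeros(nz), np.zeros(0)
    return [(l, q, e, G_left, G_right, h, c_left, c_right)]

cstrs = control_box_cstr(None, None)
```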
- `solver_settings` - a dictionary of settings to pass to the lower-level Julia solver
- `solver_state` - a previous solver state to pass to the lower-level Julia solver
- `filter_method` - whether to apply low-pass-like filtering of SCP solutions to avoid solution oscillation
- `filter_window` - how large an SCP iteration window to use for solution filtering
- `filter_it0` - the SCP iteration from which to start applying filtering
- `return_min_viol` - whether to return the minimum-deviation solution (measured from the previous iterate) instead of the last one
- `min_viol_it0` - the SCP iteration from which to start looking for the minimum-deviation solution
Launching multiple solver processes can be useful when many problems need to be solved. This presents some challenges:
- the solver needs to be able to discover existing, ready persistent worker processes
  - we use redis here
- batches of problem specifications need to be fed to a solution function
  - we redefine the optimal control problem slightly as a set of keyword arguments for the `solve` function; positional arguments become keyword arguments
Launch
$ python3 -m pmpc.remote --help
usage: remote.py [-h] [--port PORT] [--verbose] [--worker-num WORKER_NUM]
[--redis-host REDIS_HOST] [--redis-port REDIS_PORT]
[--redis-password REDIS_PASSWORD]
optional arguments:
-h, --help show this help message and exit
--port PORT, -p PORT TCP port on which to start the server
--verbose, -v
--worker-num WORKER_NUM, -n WORKER_NUM
Number of workers to start, 0 means number equal to
physical CPU cores.
--redis-host REDIS_HOST
Redis host
--redis-port REDIS_PORT
Redis port
--redis-password REDIS_PASSWORD
Redis password
$ python3 -m pmpc.remote --worker-num 0 --redis-password my_redis_password
then
from pmpc.remote import solve_problems
problems = [generate_problem(...) for _ in range(32)]
solutions = solve_problems(problems)
We solve the discovery problem by making use of redis, an in-memory key-value store (a database). You need to have a working `redis-server` running (e.g., `sudo apt install redis-server`).
Note: Since our worker processes are TCP-based, they can run on any network-connected computer (which has a copy of the Python environment), as long as all computers can access a redis database where the workers register.
We solve the problem-specification challenge by redefining the optimal control problem slightly. All arguments to the `solve` function become keyword arguments, and a problem is defined as a dictionary, `Dict[str, Any]`.
The `solve_problems` method solves a list of dictionary problem definitions and returns a list of solutions.
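A problem dictionary is thus just the keyword arguments of `solve`. Assuming the keyword names mirror the positional signature shown earlier (our assumption; check `help(pmpc.solve)` for the exact names), a batch might be assembled like this:

```python
import numpy as np

N, xdim, udim = 10, 3, 2

def f_fx_fu_fn(xt, ut):
    # illustrative identity dynamics linearization
    f = xt
    fx = np.tile(np.eye(xdim), (xt.shape[0], 1, 1))
    fu = np.tile(np.zeros((xdim, udim)), (xt.shape[0], 1, 1))
    return f, fx, fu

def generate_problem(seed):
    rng = np.random.default_rng(seed)
    return dict(
        f_fx_fu_fn=f_fx_fu_fn,
        Q=np.eye(xdim),
        R=np.eye(udim),
        x0=rng.normal(size=xdim),
        X_ref=np.zeros((N, xdim)),
        U_ref=np.zeros((N, udim)),
        max_it=20,  # extra solve keywords go in the same dictionary
    )

problems = [generate_problem(i) for i in range(32)]
```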
For consensus control, simply
- batch the system instantiations into an extra (first) dimension
  - all problem data now gains a first dimension of size `M`
- specify `Nc`, the length of the consensus horizon, as a field in `solver_settings`
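The batched data layout for consensus optimization can be sketched as follows (shapes follow the description above, with `M` system instantiations and consensus horizon `Nc`; the values are illustrative):

```python
import numpy as np

M, N, Nc, xdim, udim = 4, 10, 5, 3, 2

# every piece of problem data gains a leading dimension of size M
x0 = np.random.rand(M, xdim)
X_ref = np.random.rand(M, N, xdim)
U_ref = np.random.rand(M, N, udim)

# the consensus horizon length is passed via solver_settings
solver_settings = dict(Nc=Nc)
```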
TODO more detailed explanation
import numpy as np

N, xdim, udim = 10, 3, 2
X_ref = np.random.rand(N, xdim)

def lin_cost_fn(X, U):
    # cost is np.sum((X - X_ref) ** 2)
    cx = 2 * (X - X_ref)
    cu = None
    return cx, cu
Arbitrary convex (cone) constraints can be introduced in canonical form via a callback which recomputes them at every SCP iteration. This allows encoding non-convex constraints via their SCP convexification.
The canonical form for a convex problem is given by
$$\min_{z} ~ c^T z \quad \text{s.t.} \quad G z - h \in \mathcal{K}$$
where $\mathcal{K}$ is a product of the non-positive orthant, second-order cones, and exponential cones.
For a second-order cone constraint given by a matrix `A` and vector `b` (rows of $G$ and entries of $h$), membership of $A z - b$ in the cone means, mathematically,
`np.linalg.norm(A[1:, :] @ z - b[1:]) <= A[0, :].T @ z - b[0]`
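A tiny numeric check of this convention (the values are arbitrary):

```python
import numpy as np

# A z - b is in the second-order cone iff the norm of the tail rows
# is bounded by the head row.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, -0.5]])
b = np.array([-1.0, 0.0, 0.0])
z = np.array([0.2, 0.3])
head = A[0, :] @ z - b[0]    # = 1.2
tail = A[1:, :] @ z - b[1:]  # = [0.3, -0.05]
in_cone = bool(np.linalg.norm(tail) <= head)  # True for these values
```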
The exponential cone is defined as a 3-output matrix expression (the image of the cone matrix is of dimension 3). We follow the convention from JuMP.jl for $\mathcal{K}_{\exp}$, so three rows $G z - h = (r, s, t)$ belong to the cone if $s \, e^{r / s} \leq t$ with $s > 0$.
The underlying convex solver can be selected by passing a composite keyword argument
sol = solve(
...,
solver_settings = dict(solver="ecos") # for example, or "cosmo", "osqp", "mosek"
)
- `ECOS` - FREE - very fast and very general (used by default)
- `OSQP` - FREE - very fast, but only supports linear constraints
- `COSMO` - FREE - very general, but it tends to run very slowly
- `Mosek` - NOT FREE - extremely fast, but requires a (not free) Mosek license
TODO
Warm-start in SCP MPC can refer to either
- warm-starting the SCP procedure through a good `X_prev, U_prev` guess - this is supported
- warm-starting the underlying convex solver - not supported

Warm-starting the SCP procedure by providing a good `X_prev, U_prev` guess is supported and very much encouraged for good SCP performance!
Warm-starting of the underlying convex solver is currently not supported, as it does not lead to a noticeable performance improvement on problems we tested the solver on.
@inproceedings{dyro2021particle,
  author = {Dyro, Robert and Harrison, James and Sharma, Apoorva and Pavone, Marco},
  title = {{P}article {MPC} for {U}ncertain and {L}earning-{B}ased {C}ontrol},
  booktitle = {{IEEE/RSJ} {I}nternational {C}onference on {I}ntelligent {R}obots and {S}ystems},
  year = {2021},
}