Quadratic programming

Primal problem

A quadratic program is defined in standard form as:

\[\begin{split}\begin{array}{ll} \underset{x}{\mbox{minimize}} & \frac{1}{2} x^T P x + q^T x \\ \mbox{subject to} & G x \leq h \\ & A x = b \\ & lb \leq x \leq ub \end{array}\end{split}\]

The vectors \(lb\) and \(ub\) can contain \(\pm \infty\) values to disable bounds on some coordinates. To solve such a problem, build the matrices that define it and call the solve_qp() function:

from numpy import array, dot
from qpsolvers import solve_qp

M = array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
P = dot(M.T, M)  # quick way to build a symmetric matrix
q = dot(array([3., 2., 3.]), M).reshape((3,))
G = array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = array([3., 2., -2.]).reshape((3,))
A = array([1., 1., 1.])
b = array([1.])

x = solve_qp(P, q, G, h, A, b, solver="osqp")
print(f"QP solution: x = {x}")

The backend QP solver is selected among supported solvers via the solver keyword argument. This example outputs the solution [0.30769231, -0.69230769,  1.38461538].
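The list of solvers detected on your machine is given by qpsolvers.available_solvers. Below is a minimal sketch of the bound feature mentioned earlier, reusing the matrices defined above; the bound values are arbitrary and the np.inf entries simply disable the corresponding bounds:

import numpy as np
from qpsolvers import available_solvers

print(available_solvers)  # names of the QP solvers installed on this machine

lb = np.array([-1.0, -np.inf, -1.0])  # no lower bound on the second coordinate
ub = np.array([np.inf, np.inf, 1.5])  # only the third coordinate is bounded above
x = solve_qp(P, q, G, h, A, b, lb, ub, solver="osqp")
print(f"QP solution with bounds: x = {x}")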

qpsolvers.solve_qp(P, q, G=None, h=None, A=None, b=None, lb=None, ub=None, solver=None, initvals=None, verbose=False, **kwargs)

Solve a quadratic program.

The quadratic program is defined as:

\[\begin{split}\begin{array}{ll} \underset{x}{\mbox{minimize}} & \frac{1}{2} x^T P x + q^T x \\ \mbox{subject to} & G x \leq h \\ & A x = b \\ & lb \leq x \leq ub \end{array}\end{split}\]

using the QP solver selected by the solver keyword argument.

Parameters:
  • P (Union[ndarray, csc_matrix]) – Symmetric cost matrix (most solvers require it to be definite as well).

  • q (ndarray) – Cost vector.

  • G (Union[ndarray, csc_matrix, None]) – Linear inequality matrix.

  • h (Optional[ndarray]) – Linear inequality vector.

  • A (Union[ndarray, csc_matrix, None]) – Linear equality matrix.

  • b (Optional[ndarray]) – Linear equality vector.

  • lb (Optional[ndarray]) – Lower bound constraint vector. Can contain -np.inf.

  • ub (Optional[ndarray]) – Upper bound constraint vector. Can contain +np.inf.

  • solver (Optional[str]) – Name of the QP solver, to choose in qpsolvers.available_solvers. This argument is mandatory.

  • initvals (Optional[ndarray]) – Primal candidate vector \(x\) values used to warm-start the solver.

  • verbose (bool) – Set to True to print out extra information.

Note

In quadratic programming, the matrix \(P\) should be symmetric. Many solvers (including CVXOPT, OSQP and quadprog) leverage this property and may return unintended results when it is not the case. You can project \(P\) onto its symmetric part with:

P = 0.5 * (P + P.transpose())

Some solvers (like quadprog) will further require that \(P\) is definite, while other solvers (like ProxQP or QPALM) can work with semi-definite matrices.
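As a minimal sketch (the helper name and the 1e-8 regularization weight are illustrative, not a library recommendation), one can symmetrize a cost matrix and, for solvers that require a positive definite matrix, shift its eigenvalues up by a small margin:

import numpy as np

def make_symmetric_definite(P, reg=1e-8):
    # Project P onto its symmetric part, then add reg to all of its eigenvalues;
    # if the symmetric part is positive semidefinite, the result is positive definite.
    # Note that the regularization slightly changes the problem, so keep reg small.
    P_sym = 0.5 * (P + P.T)
    return P_sym + reg * np.eye(P_sym.shape[0])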

Return type:

Optional[ndarray]

Returns:

Optimal solution if found, otherwise None.

Raises:
  • NoSolverSelected – If the solver keyword argument is not set.

  • ParamError – If any solver parameter is incorrect.

  • ProblemError – If the problem is not correctly defined. For instance, if the solver requires a definite cost matrix but the provided matrix \(P\) is not.

  • SolverError – If the solver failed during its execution.

  • SolverNotFound – If the requested solver is not in qpsolvers.available_solvers.

Notes

Extra keyword arguments given to this function are forwarded to the underlying solver. For example, we can call OSQP with a custom absolute feasibility tolerance by solve_qp(P, q, G, h, solver='osqp', eps_abs=1e-6). See the Supported solvers page for details on the parameters available to each solver.
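For instance, the call mentioned above, reusing the matrices from the first example (the tolerance value is arbitrary; see each solver's documentation for its own parameters):

# Forward the OSQP-specific absolute tolerance through solve_qp
x = solve_qp(P, q, G, h, solver="osqp", eps_abs=1e-6)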

There is no guarantee that a ValueError is raised if the provided problem is non-convex, as some solvers don’t check for this. Rather, if the problem is non-convex and the solver fails because of that, then a ValueError will be raised.

See the examples/ folder in the repository for more use cases. For a more general introduction you can also check out this post on quadratic programming in Python.

Problem class

Alternatively, we can define the matrices and vectors using the Problem class:

class qpsolvers.problem.Problem(P, q, G=None, h=None, A=None, b=None, lb=None, ub=None)

Data structure describing a quadratic program.

The quadratic program is defined as:

\[\begin{split}\begin{array}{ll} \underset{x}{\mbox{minimize}} & \frac{1}{2} x^T P x + q^T x \\ \mbox{subject to} & G x \leq h \\ & A x = b \\ & lb \leq x \leq ub \end{array}\end{split}\]

This class provides sanity checks and metrics such as the condition number of a problem.
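For instance, a minimal sketch reusing the matrices from the first example:

from qpsolvers import Problem

problem = Problem(P, q, G, h, A, b)
problem.check_constraints()  # raises ProblemError if the constraints are inconsistent
print(problem.is_unconstrained)  # False, since G and A are set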

P

Symmetric cost matrix (most solvers require it to be definite as well).

q

Cost vector.

G

Linear inequality matrix.

h

Linear inequality vector.

A

Linear equality matrix.

b

Linear equality vector.

lb

Lower bound constraint vector. Can contain -np.inf.

ub

Upper bound constraint vector. Can contain +np.inf.

check_constraints()

Check that problem constraints are properly specified.

Raises:

ProblemError – If the constraints are not properly defined.

cond(active_set)

Condition number of the problem matrix.

Compute the condition number of the symmetric matrix representing the problem data:

\[\begin{split}M = \begin{bmatrix} P & G_{act}^T & A_{act}^T \\ G_{act} & 0 & 0 \\ A_{act} & 0 & 0 \end{bmatrix}\end{split}\]

where \(G_{act}\) and \(A_{act}\) denote the active inequality and equality constraints at the optimum of the problem.

Parameters:

active_set (ActiveSet) – Active set to evaluate the condition number with. It should contain the set of active constraints at the optimum of the problem.

Return type:

float

Returns:

Condition number of the problem.

Raises:

ProblemError – If the problem is sparse.

Notes

Having a low condition number (say, less than 1e10) is strongly tied to the capacity of numerical solvers to solve a problem. This is the motivation for preconditioning, as detailed for instance in Section 5 of [Stellato2020].
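To make the matrix above concrete, here is a hand-rolled sketch using NumPy directly rather than the cond() method; it reuses the matrices from the first example and assumes, purely for illustration, that every inequality and equality constraint is active:

import numpy as np

G_act, A_act = G, np.atleast_2d(A)   # pretend every constraint is active
m_G, m_A = G_act.shape[0], A_act.shape[0]
M = np.block([
    [P, G_act.T, A_act.T],
    [G_act, np.zeros((m_G, m_G + m_A))],
    [A_act, np.zeros((m_A, m_G + m_A))],
])
print(np.linalg.cond(M))  # large values indicate an ill-conditioned problem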

get_cute_classification(interest)

Get the CUTE classification string of the problem.

Parameters:

interest (str) – Either ‘A’, ‘M’ or ‘R’: ‘A’ if the problem is academic, that is, has been constructed specifically by researchers to test one or more algorithms; ‘M’ if the problem is part of a modelling exercise where the actual value of the solution is not used in a genuine practical application; and ‘R’ if the problem’s solution is (or has been) actually used in a real application for purposes other than testing algorithms.

Return type:

str

Returns:

CUTE classification string of the problem.

Notes

Check out the CUTE classification scheme for details.

property has_sparse: bool

Check whether the problem has sparse matrices.

Returns:

True if at least one of the \(P\), \(G\) or \(A\) matrices is sparse.

property is_unconstrained: bool

Check whether the problem is unconstrained.

Returns:

True if the problem has no constraint at all, False otherwise.

static load(file)

Load problem from file.

Parameters:

file (file-like object, string, or pathlib.Path) – The file to read. File-like objects must support the seek() and read() methods and must always be opened in binary mode. Pickled files require that the file-like object support the readline() method as well.

save(file)

Save problem to file.

Parameters:

file (str or file) – Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the .npz extension will be appended to the filename if it is not already there.

Return type:

None
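A save/load round-trip might look like this (the file name is illustrative; it assumes a Problem instance built as in the snippets above):

problem.save("my_qp.npz")             # matrices are written to an .npz archive
reloaded = Problem.load("my_qp.npz")  # static method rebuilding the Problem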

unpack()

Get problem matrices as a tuple.

Return type:

Tuple[Union[ndarray, csc_matrix], ndarray, Union[ndarray, csc_matrix, None], Optional[ndarray], Union[ndarray, csc_matrix, None], Optional[ndarray], Optional[ndarray], Optional[ndarray]]

Returns:

Tuple (P, q, G, h, A, b, lb, ub) of problem matrices.
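For instance, the tuple can be unpacked directly into the usual variable names:

P, q, G, h, A, b, lb, ub = problem.unpack()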

The solve function corresponding to Problem is solve_problem() rather than solve_qp().

Dual multipliers

The dual of the quadratic program defined above can be written as:

\[\begin{split}\begin{array}{ll} \underset{x, z, y, z_{\mathit{box}}}{\mbox{maximize}} & -\frac{1}{2} x^T P x - h^T z - b^T y - lb^T z_{\mathit{box}}^- - ub^T z_{\mathit{box}}^+ \\ \mbox{subject to} & P x + G^T z + A^T y + z_{\mathit{box}} + q = 0 \\ & z \geq 0 \end{array}\end{split}\]

where \(v^- = \min(v, 0)\) and \(v^+ = \max(v, 0)\) are taken componentwise. To solve both a problem and its dual, getting a full primal-dual solution \((x^*, z^*, y^*, z_\mathit{box}^*)\), build a Problem and call the solve_problem() function:

import numpy as np
from qpsolvers import Problem, solve_problem

M = np.array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
P = M.T.dot(M)  # quick way to build a symmetric matrix
q = np.array([3., 2., 3.]).dot(M).reshape((3,))
G = np.array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = np.array([3., 2., -2.]).reshape((3,))
A = np.array([1., 1., 1.])
b = np.array([1.])
lb = -0.6 * np.ones(3)
ub = +0.7 * np.ones(3)

problem = Problem(P, q, G, h, A, b, lb, ub)
solution = solve_problem(problem, solver="proxqp")

print(f"Primal: x = {solution.x}")
print(f"Dual (Gx <= h): z = {solution.z}")
print(f"Dual (Ax == b): y = {solution.y}")
print(f"Dual (lb <= x <= ub): z_box = {solution.z_box}")

The function returns a Solution with both primal and dual vectors. This example outputs the following solution:

Primal: x = [ 0.63333169 -0.33333307  0.70000137]
Dual (Gx <= h): z = [0.         0.         7.66660538]
Dual (Ax == b): y = [-16.63326017]
Dual (lb <= x <= ub): z_box = [ 0.          0.         26.26649724]

qpsolvers.solve_problem(problem, solver, initvals=None, verbose=False, **kwargs)

Solve a quadratic program using a given solver.

Parameters:
  • problem (Problem) – Quadratic program to solve.

  • solver (str) – Name of the solver, to choose in qpsolvers.available_solvers.

  • initvals (Optional[ndarray]) – Primal candidate vector \(x\) values used to warm-start the solver.

  • verbose (bool) – Set to True to print out extra information.

Note

In quadratic programming, the matrix \(P\) should be symmetric. Many solvers (including CVXOPT, OSQP and quadprog) assume this is the case and may return unintended results when the provided matrix is not. Thus, make sure your matrix is indeed symmetric before calling this function, for instance by projecting it onto its symmetric part \(S = \frac{1}{2} (P + P^T)\).

Return type:

Solution

Returns:

Solution found by the solver, if any, along with solver-specific return values.

Raises:
  • SolverNotFound – If the requested solver is not in qpsolvers.available_solvers.

  • ValueError – If the problem is not correctly defined. For instance, if the solver requires a definite cost matrix but the provided matrix \(P\) is not.

Notes

Extra keyword arguments given to this function are forwarded to the underlying solver. For example, we can call OSQP with a custom absolute feasibility tolerance by solve_problem(problem, solver='osqp', eps_abs=1e-6). See the Supported solvers page for details on the parameters available to each solver.

There is no guarantee that a ValueError is raised if the provided problem is non-convex, as some solvers don’t check for this. Rather, if the problem is non-convex and the solver fails because of that, then a ValueError will be raised.

See the examples/ folder in the repository for more use cases. For an introduction to dual multipliers you can also check out this post on optimality conditions and numerical tolerances in QP solvers.

Optimality of a solution

The Solution class describes the solution found by a solver to a given problem. It is linked to the corresponding Problem, which it uses for instance to compute residuals. We can then check the optimality of the solution returned by a solver with:

import numpy as np
from qpsolvers import Problem, solve_problem

M = np.array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
P = M.T.dot(M)  # quick way to build a symmetric matrix
q = np.array([3., 2., 3.]).dot(M).reshape((3,))
G = np.array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = np.array([3., 2., -2.]).reshape((3,))
A = np.array([1., 1., 1.])
b = np.array([1.])
lb = -0.6 * np.ones(3)
ub = +0.7 * np.ones(3)

problem = Problem(P, q, G, h, A, b, lb, ub)
solution = solve_problem(problem, solver="qpalm")

print(f"- Solution is{'' if solution.is_optimal(1e-8) else ' NOT'} optimal")
print(f"- Primal residual: {solution.primal_residual():.1e}")
print(f"- Dual residual: {solution.dual_residual():.1e}")
print(f"- Duality gap: {solution.duality_gap():.1e}")

This example prints:

- Solution is optimal
- Primal residual: 1.1e-16
- Dual residual: 1.4e-14
- Duality gap: 0.0e+00

You can check out [tolerances] for an overview of optimality conditions and why a solution is optimal if and only if these three residuals are zero.

class qpsolvers.solution.Solution(problem, extras=<factory>, found=None, obj=None, x=None, y=None, z=None, z_box=None)

Solution returned by a QP solver for a given problem.

extras

Other outputs, specific to each solver.

found

True if the solution was found successfully by a solver, False if the solver did not find a solution or detected an unfeasible problem, None if no solver was run.

problem

Quadratic program the solution corresponds to.

obj

Value of the primal objective at the solution (None if no solution was found).

x

Solution vector for the primal quadratic program (None if no solution was found).

y

Dual multipliers for equality constraints (None if no solution was found, or if there is no equality constraint). The dimension of \(y\) is equal to the number of equality constraints. The values \(y_i\) can be either positive or negative.

z

Dual multipliers for linear inequality constraints (None if no solution was found, or if there is no inequality constraint). The dimension of \(z\) is equal to the number of inequalities. The value \(z_i\) for inequality \(i\) is always nonnegative.

  • If \(z_i > 0\), the inequality is active at the solution: \(G_i x = h_i\).

  • If \(z_i = 0\), the inequality is inactive at the solution: \(G_i x < h_i\).

z_box

Dual multipliers for box inequality constraints (None if no solution was found, or if there is no box inequality). The sign of \(z_{box,i}\) depends on the active bound:

  • If \(z_{box,i} < 0\), then the lower bound \(lb_i = x_i\) is active at the solution.

  • If \(z_{box,i} = 0\), then neither the lower nor the upper bound are active and \(lb_i < x_i < ub_i\).

  • If \(z_{box,i} > 0\), then the upper bound \(x_i = ub_i\) is active at the solution.
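As a minimal sketch combining the attributes above (it reuses the problem from the dual-multipliers example; numerical values will vary with the solver):

import numpy as np
from qpsolvers import solve_problem

solution = solve_problem(problem, solver="proxqp")
if solution.found:
    lower_active = np.where(solution.z_box < 0)[0]  # indices where lb_i = x_i
    upper_active = np.where(solution.z_box > 0)[0]  # indices where x_i = ub_i
    print(f"objective: {solution.obj}")
    print(f"active lower bounds: {lower_active}, active upper bounds: {upper_active}")
else:
    print("no solution found (solver failure or infeasible problem)")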

dual_residual()

Compute the dual residual of the solution.

The dual residual is:

\[r_d := \| P x + q + A^T y + G^T z + z_{box} \|_\infty\]

Return type:

float

Returns:

Dual residual if it is defined, np.inf otherwise.

Notes

See for instance [tolerances] for an overview of optimality conditions and why this residual will be zero at the optimum.
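To make the formula concrete, here is a hand-computed check against dual_residual(), reusing the problem and solution from the optimality example above (a sketch assuming dense matrices and that a solution was found):

import numpy as np

P, q, G, h, A, b, lb, ub = problem.unpack()
x, y, z, z_box = solution.x, solution.y, solution.z, solution.z_box
r_d = np.linalg.norm(P @ x + q + G.T @ z + np.atleast_2d(A).T @ y + z_box, ord=np.inf)
print(r_d, solution.dual_residual())  # the two values should match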

duality_gap()

Compute the duality gap of the solution.

The duality gap is:

\[r_g := | x^T P x + q^T x + b^T y + h^T z + lb^T z_{box}^- + ub^T z_{box}^+ |\]

where \(v^- = \min(v, 0)\) and \(v^+ = \max(v, 0)\) are taken componentwise.

Return type:

float

Returns:

Duality gap if it is defined, np.inf otherwise.

Notes

See for instance [tolerances] for an overview of optimality conditions and why this gap will be zero at the optimum.

is_optimal(eps_abs)

Check whether the solution is indeed optimal.

Parameters:

eps_abs (float) – Absolute tolerance for the primal residual, dual residual and duality gap.

Return type:

bool

Notes

See for instance [tolerances] for an overview of optimality conditions in quadratic programming.

primal_residual()

Compute the primal residual of the solution.

The primal residual is:

\[r_p := \max(\| A x - b \|_\infty, \| [G x - h]^+ \|_\infty, \| [lb - x]^+ \|_\infty, \| [x - ub]^+ \|_\infty)\]

where \(v^+ = \max(v, 0)\) denotes the componentwise positive part.

Return type:

float

Returns:

Primal residual if it is defined, np.inf otherwise.

Notes

See for instance [tolerances] for an overview of optimality conditions and why this residual will be zero at the optimum.