Least squares
To solve a linear least-squares problem, simply build the matrices that define
it and call the solve_ls()
function:
from numpy import array
from qpsolvers import solve_ls

# Cost: minimize (1/2) * || R x - s ||^2
R = array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
s = array([3., 2., 3.])

# Inequality constraints: G x <= h
G = array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = array([3., 2., -2.])

x_sol = solve_ls(R, s, G, h, solver="osqp")
print(f"LS solution: {x_sol = }")
The backend QP solver is selected among supported solvers via the solver keyword argument. This example outputs the solution [0.12997217, -0.06498019, 1.74004125].
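The other arguments documented below follow the same pattern. As an illustration, here is a variation on the example above that adds a diagonal weight matrix and box bounds on \(x\); the specific weight and bound values are made up for illustration, and the OSQP backend is assumed to be installed:

from numpy import array, diag, ones
from qpsolvers import solve_ls

R = array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
s = array([3., 2., 3.])
G = array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = array([3., 2., -2.])

W = diag([1., 10., 1.])   # illustrative: weight the second residual more
lb = -10. * ones(3)       # illustrative box bounds: -10 <= x <= 10
ub = 10. * ones(3)

x_sol = solve_ls(R, s, G, h, lb=lb, ub=ub, W=W, solver="osqp")
print(f"Weighted LS solution: {x_sol = }")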
- qpsolvers.solve_ls(R, s, G=None, h=None, A=None, b=None, lb=None, ub=None, W=None, solver=None, initvals=None, verbose=False, sparse_conversion=None, **kwargs)
Solve a constrained weighted linear least-squares problem.

The least-squares problem is defined as:

\[\begin{array}{ll}
\underset{x}{\mbox{minimize}} & \frac{1}{2} \| R x - s \|^2_W = \frac{1}{2} (R x - s)^T W (R x - s) \\
\mbox{subject to} & G x \leq h \\
& A x = b \\
& lb \leq x \leq ub
\end{array}\]

using the QP solver selected by the solver keyword argument.

- Parameters:
  - R (Union[ndarray, csc_matrix]) – Factor of the cost function (most solvers require \(R^T W R\) to be positive definite, which means \(R\) should have full column rank).
  - s (ndarray) – Vector term of the cost function.
  - G (Union[ndarray, csc_matrix, None]) – Linear inequality matrix.
  - h (Optional[ndarray]) – Linear inequality vector.
  - A (Union[ndarray, csc_matrix, None]) – Linear equality matrix.
  - b (Optional[ndarray]) – Linear equality vector.
  - lb (Optional[ndarray]) – Lower bound constraint vector.
  - ub (Optional[ndarray]) – Upper bound constraint vector.
  - W (Union[ndarray, csc_matrix, None]) – Definite symmetric weight matrix used to define the norm of the cost function. The standard L2 norm (W = Identity) is used by default.
  - solver (Optional[str]) – Name of the QP solver, to choose in qpsolvers.available_solvers. This argument is mandatory.
  - initvals (Optional[ndarray]) – Vector of initial x values used to warm-start the solver.
  - verbose (bool) – Set to True to print out extra information.
  - sparse_conversion (Optional[bool]) – Set to True to use a sparse conversion strategy and to False to use a dense strategy. By default, the conversion strategy is determined by the sparsity of \(R\) (sparse if it is a CSC matrix, dense otherwise). See Notes below.
- Return type:
  Optional[ndarray]
- Returns:
  Optimal solution if found, otherwise None.
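Because None is returned when no optimum is found, it is worth checking the return value before using it. A minimal sketch, reusing the problem data from the example at the top of this page and assuming the OSQP backend:

from numpy import array
from qpsolvers import solve_ls

R = array([[1., 2., 0.], [-8., 3., 2.], [0., 1., 1.]])
s = array([3., 2., 3.])
G = array([[1., 2., 1.], [2., 0., 1.], [-1., 2., -1.]])
h = array([3., 2., -2.])

x_sol = solve_ls(R, s, G, h, solver="osqp")
if x_sol is None:
    print("No solution found (infeasible problem or solver failure)")
else:
    print(f"LS solution: {x_sol = }")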
Note
Some solvers (like quadprog) require a full-rank matrix \(R\), while others (like ProxQP or QPALM) can work even when \(R\) has a nontrivial nullspace.
Notes
This function implements two strategies to convert the least-squares cost \((R, s)\) to a quadratic-programming cost \((P, q)\): one that assumes \(R\) is dense, and one that assumes \(R\) is sparse. These two strategies are detailed in this note. The sparse strategy introduces extra variables \(y = R x\) and will likely perform better on sparse problems, although this may not always be the case (for instance, it may perform worse if \(R\) has many more rows than columns).
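For reference, the dense strategy amounts to the textbook expansion of the weighted least-squares cost into a QP cost \(P = R^T W R\), \(q = -R^T W s\). A minimal sketch of that expansion, using a hypothetical helper name that is not part of the qpsolvers API:

import numpy as np

def dense_ls_to_qp(R: np.ndarray, s: np.ndarray, W: np.ndarray | None = None):
    """Hypothetical helper: expand (1/2) ||R x - s||^2_W into a QP cost (P, q)."""
    if W is None:
        W = np.eye(R.shape[0])  # standard L2 norm by default
    WR = W @ R
    P = R.T @ WR   # quadratic term: R^T W R
    q = -WR.T @ s  # linear term: -R^T W s
    return P, q

The sparse strategy, by contrast, avoids forming \(R^T W R\) explicitly and instead keeps \(R\) in the constraints through the extra variables \(y = R x\).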
Extra keyword arguments given to this function are forwarded to the underlying solver. For example, OSQP has a setting eps_abs which we can provide by solve_ls(R, s, G, h, solver='osqp', eps_abs=1e-4).
See the examples/ folder in the repository for more advanced use cases. For a more general introduction, you can also check out this post on least squares in Python.