lqp_py (learning quadratic programs) is a Python package for efficiently solving medium- to large-scale batches of box-constrained quadratic programs. The QP solver is implemented as a custom PyTorch module. The forward pass invokes a basic implementation of the ADMM algorithm; the backward pass is computed by implicit differentiation of a fixed-point mapping customized to the ADMM algorithm.
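For intuition, the core forward iteration for a box-constrained QP (without the equality constraints that lqp_py also handles) can be sketched in a few lines of PyTorch. This is an illustrative re-implementation of the standard ADMM splitting, not lqp_py's actual code, and it solves a single problem rather than a batch:

```python
import torch

def admm_box_qp(Q, p, lb, ub, rho=1.0, max_iter=1000, tol=1e-8):
    """Illustrative ADMM for: min 0.5 z'Qz + p'z  s.t.  lb <= z <= ub.
    lqp_py adds equality constraints, batching, and an implicit backward
    pass on top of this basic scheme."""
    n = Q.shape[0]
    # Factor (Q + rho*I) once; the factor is reused in every iteration.
    L = torch.linalg.cholesky(Q + rho * torch.eye(n, dtype=Q.dtype))
    x = torch.zeros(n, dtype=Q.dtype)
    u = torch.zeros(n, dtype=Q.dtype)
    for _ in range(max_iter):
        # z-update: unconstrained quadratic solve.
        rhs = (rho * (x - u) - p).unsqueeze(-1)
        z = torch.cholesky_solve(rhs, L).squeeze(-1)
        # x-update: projection onto the box [lb, ub].
        x_new = torch.clamp(z + u, lb, ub)
        # Dual update and primal/dual residuals.
        u = u + z - x_new
        r_prim = torch.norm(z - x_new)
        r_dual = rho * torch.norm(x_new - x)
        x = x_new
        if r_prim < tol and r_dual < tol:
            break
    return x
```

For a diagonal Q the box-constrained minimizer is simply `clamp(-p / diag(Q), lb, ub)`, which gives an easy sanity check for the sketch above.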
For more information, please see our publication in *Computational Optimization and Applications*.
To use the ADMM solver you will need to install NumPy and PyTorch. If you want to invoke the SCS solver, you will also need to install scs and SciPy.
Please see requirements.txt for full build details.
The demo directory contains simple demos for forward solving and backward differentiating through the ADMM solver. The following runtime experiments are available in the experiments directory.
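The idea behind the backward pass can be shown in miniature with a custom `torch.autograd.Function`. The toy below differentiates through an *unconstrained* QP, where the fixed-point condition is simply `Q z + p = 0`; lqp_py applies the same implicit-function-theorem idea to the full ADMM fixed-point map (which also involves the box projection and dual variables). This is an illustrative sketch, not lqp_py's actual backward routine:

```python
import torch

class ImplicitQP(torch.autograd.Function):
    """Differentiate z*(p) = argmin 0.5 z'Qz + p'z without unrolling a solver.
    Since z* satisfies Q z* + p = 0, the implicit function theorem gives
    dz*/dp = -Q^{-1}, so the vector-Jacobian product is -Q^{-T} grad_z."""

    @staticmethod
    def forward(ctx, Q, p):
        z = torch.linalg.solve(Q, -p)  # solve the optimality condition
        ctx.save_for_backward(Q)
        return z

    @staticmethod
    def backward(ctx, grad_z):
        (Q,) = ctx.saved_tensors
        # Implicit differentiation: one linear solve, no stored iterates.
        grad_p = -torch.linalg.solve(Q.T, grad_z)
        return None, grad_p
```

The payoff is that memory and backward cost do not grow with the number of forward ADMM iterations, unlike unrolled differentiation.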
All experiments were conducted on an Apple Mac Pro (2.6 GHz 6-core Intel Core i7, 32 GB 2667 MHz DDR4) running macOS Monterey.
*Figure: runtime plots for dz = 10, 50, 100, 250, 500, and 1000 (one panel per problem size).*
Computational performance of ADMM FP, ADMM KKT, ADMM Unroll, OptNet, and SCS for various problem sizes (dz). Batch size = 128 and stopping tolerance = 10^-5.
*Figure: runtime and convergence plots for dz = 500.*
Training loss and computational performance for learning p. Problem setup: dz = 500, batch size = 32, epochs = 100, and stopping tolerance = 10^-5.
- Most importantly, the lqp ADMM solver can only solve quadratic programs (QPs) with linear equality constraints and box constraints. If your QP contains more general constraints, we recommend qpth or cvxpylayers.
- Computational performance may differ from what is reported in the paper. In our experience, computational speed depends on the hardware, language (Python or R), and package versions used. Importantly, for large problems (more than 500 decision variables with batch sizes above 64), RAM consumption causes a considerable slowdown. Making the forward and backward routines more memory efficient is a high priority.
- When using the ADMM solver, it is important that the QP variables are properly scaled and that the ADMM penalty parameter (rho) is set appropriately. The ADMM solver now supports automatic scaling and parameter selection:
  - `scale=True`: automatic scaling of the problem data.
  - `rho=None`: automatic rho selection.
  - `adaptive_rho=True`: dynamically tune rho based on the primal/dual residuals.
Automatic scaling and parameter selection are relatively new features. If convergence is slow, we recommend experimenting with the scale of the QP input variables and manually scaling the data for your particular use case.
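A common way to implement an `adaptive_rho`-style option is the residual-balancing rule popularized by Boyd et al. A minimal sketch (the constants `mu` and `tau` are illustrative defaults, not necessarily the ones lqp_py uses internally):

```python
def update_rho(rho, r_prim, r_dual, mu=10.0, tau=2.0):
    """Residual balancing: increase rho when the primal residual dominates
    (pushing iterates toward feasibility), and decrease it when the dual
    residual dominates. mu and tau are illustrative, not lqp_py's defaults."""
    if r_prim > mu * r_dual:
        return rho * tau  # primal residual too large: tighten the penalty
    if r_dual > mu * r_prim:
        return rho / tau  # dual residual too large: relax the penalty
    return rho
```

Called once per (or every few) ADMM iterations, this keeps the two residuals within a factor of `mu` of each other, which in practice reduces sensitivity to the initial choice of rho.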
- Convergence of ADMM (and first-order solvers in general) can slow down if the matrix Q is rank deficient. We are currently exploring acceleration methods to improve convergence for rank-deficient Q.