cuPDLP is now available in COPT 7.1!
Code for solving LPs on the GPU using the first-order method PDLP.
This is the C implementation of the Julia package cuPDLP.jl.
We use CMake to build cuPDLP-C. The current version switches to the HiGHS project for reading MPS files. Please compile with HiGHS 1.6.0 and CUDA 12.3.
Note that if you install HiGHS from the precompiled binaries, compressed MPS files cannot be read; to enable this, build and install HiGHS from source with zlib support (see this page to find out more). Once you have set up HiGHS and CUDA, set the following environment variables:
export HIGHS_HOME=/path-to-highs
export CUDA_HOME=/path-to-cuda
For example, if HiGHS 1.6.0 has been installed with its default configuration, so that the binaries are available as /usr/local/lib/libhighs.so.1.6.0 with headers in /usr/local/include/highs, then HIGHS_HOME should be set to /usr/local. Similarly, if the CUDA toolkit is installed in /usr/local/cuda-12.3, then CUDA_HOME should be /usr/local/cuda-12.3.
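With those example install locations, the exports would read:

```
# example values matching the default install locations described above
export HIGHS_HOME=/usr/local
export CUDA_HOME=/usr/local/cuda-12.3
```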
By setting -DBUILD_CUDA=ON (the default is OFF, i.e., the CPU version), you build the GPU version of cuPDLP-C.
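For a CPU-only build, leave the flag at its default or set it explicitly, for example:

```
# run from the build directory; BUILD_CUDA defaults to OFF, which produces the CPU version
cmake -DBUILD_CUDA=OFF ..
```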
Examples
- use the debug mode:
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Debug -DBUILD_CUDA=ON ..
cmake --build . --target plc
Then you can find the binary plc in the folder <cuPDLP-C>/build/bin/.
- when using the release mode, we suggest the following options (a full configure-and-build sequence is sketched after this list):
cmake -DBUILD_CUDA=ON \
-DCMAKE_C_FLAGS_RELEASE="-O2 -DNDEBUG" \
-DCMAKE_CXX_FLAGS_RELEASE="-O2 -DNDEBUG" \
-DCMAKE_CUDA_FLAGS_RELEASE="-O2 -DNDEBUG" ..
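For instance, a full release configure-and-build sequence might look like the following sketch; adding -DCMAKE_BUILD_TYPE=Release is our assumption here, mirroring the debug example above:

```
mkdir build
cd build
# release flags as suggested above; CMAKE_BUILD_TYPE=Release is assumed, not prescribed by the project
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_CUDA=ON \
      -DCMAKE_C_FLAGS_RELEASE="-O2 -DNDEBUG" \
      -DCMAKE_CXX_FLAGS_RELEASE="-O2 -DNDEBUG" \
      -DCMAKE_CUDA_FLAGS_RELEASE="-O2 -DNDEBUG" ..
cmake --build . --target plc
```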
If you wish to use the Python interface, use the following steps:
git submodule update --init --recursive
then build the target pycupdlp
cmake --build . --target pycupdlp
(Optional) You may check out the setup scripts under pycupdlp.
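As a quick smoke test you could try importing the built module. This is only a sketch: we assume the extension module is named pycupdlp, matching the CMake target, and that the directory holding the built module is on PYTHONPATH; adjust the path to your build output.

```
# hypothetical path: point PYTHONPATH at the directory containing the built pycupdlp module
PYTHONPATH=./build/lib python3 -c "import pycupdlp; print('pycupdlp imported OK')"
```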
Usage example (set nIterLim to 5000 and solve):
./bin/plc -fname <mpsfile> -nIterLim 5000
To print the help message, use -h:
./bin/plc -h
or
./bin/plc <something> -h <something>
Param | Type | Range | Default | Description |
---|---|---|---|---|
fname | str | | | .mps file of the LP instance |
out | str | | ./solution-sum.json | .json file to save the solution summary |
outSol | str | | ./solution.json | .json file to save the solution |
savesol | bool | true, false | false | Whether to write the solution to the .json output |
ifScaling | bool | true, false | true | Whether to use scaling |
ifRuizScaling | bool | true, false | true | Whether to use Ruiz scaling (10 times) |
ifL2Scaling | bool | true, false | false | Whether to use L2 scaling |
ifPcScaling | bool | true, false | true | Whether to use Pock-Chambolle scaling |
nIterLim | int | >=0 | INT_MAX | Maximum iteration number |
eLineSearchMethod | int | 0, 2 | 2 | Choose line search: 0 - fixed, 2 - adaptive |
dPrimalTol | double | >=0 | 1e-4 | Primal feasibility tolerance for termination |
dDualTol | double | >=0 | 1e-4 | Dual feasibility tolerance for termination |
dGapTol | double | >=0 | 1e-4 | Duality gap tolerance for termination |
dTimeLim | double | >=0 | 3600 | Time limit (in seconds) |
eRestartMethod | int | 0, 1 | 1 | Choose restart: 0 - none, 1 - KKT version |
dFeasTol | double | >=0 | 1e-8 | Tolerance for the primal and dual infeasibility checks |
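For example, to tighten the tolerances, set a 600-second time limit, and save the solution to a .json file (a sketch; parameter names and value formats follow the table above):

```
# boolean parameters take true/false values, as listed in the Range column
./bin/plc -fname <mpsfile> \
          -dPrimalTol 1e-6 -dDualTol 1e-6 -dGapTol 1e-6 \
          -dTimeLim 600 -savesol true -outSol ./solution.json
```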
Consider the generic linear programming problem:

$$
\begin{aligned}
\min_{x}\quad & c^\top x \\
\text{s.t.}\quad & Ax = b, \\
& Gx \ge h, \\
& l \le x \le u.
\end{aligned}
$$

Equivalently, we solve the following saddle-point problem,

$$
\min_{l \le x \le u}\ \max_{y \in Y}\ \mathcal{L}(x, y) = c^\top x - y^\top K x + q^\top y,
$$

where $K = \begin{pmatrix} A \\ G \end{pmatrix}$, $q = \begin{pmatrix} b \\ h \end{pmatrix}$, and the dual variables $y = (y_1, y_2)$ lie in $Y = \{(y_1, y_2) : y_2 \ge 0\}$, with $y_1$ and $y_2$ the multipliers of the equality and inequality constraints, respectively.

The Primal-Dual Hybrid Gradient (PDHG) algorithm takes the step as follows,

$$
\begin{aligned}
x^{k+1} &= \operatorname{proj}_{[l, u]}\left(x^k - \tau\left(c - K^\top y^k\right)\right), \\
y^{k+1} &= \operatorname{proj}_{Y}\left(y^k + \sigma\left(q - K\left(2x^{k+1} - x^k\right)\right)\right),
\end{aligned}
$$

where $\tau$ and $\sigma$ are the primal and dual step sizes.

The termination criteria contain the primal feasibility, dual feasibility, and duality gap:

$$
\begin{aligned}
\left\|\begin{pmatrix} Ax - b \\ (h - Gx)^+ \end{pmatrix}\right\| &\le \epsilon\left(1 + \|q\|\right), \\
\left\|c - K^\top y - \lambda\right\| &\le \epsilon\left(1 + \|c\|\right), \\
\left|q^\top y + l^\top \lambda^+ - u^\top \lambda^- - c^\top x\right| &\le \epsilon\left(1 + \left|c^\top x\right| + \left|q^\top y + l^\top \lambda^+ - u^\top \lambda^-\right|\right),
\end{aligned}
$$

where $\lambda$ is the reduced cost (the multiplier of the bound constraints $l \le x \le u$), and $\lambda^+$, $\lambda^-$ denote its positive and negative parts.
Dongdong Ge, Haodong Hu, Qi Huangfu, Jinsong Liu, Tianhao Liu, Haihao Lu, Jinwen Yang, Yinyu Ye, Chuwen Zhang
- Jinsong Liu <github.com/JinsongLiu6>
- Tianhao Liu <github.com/SkyLiu0>
- Chuwen Zhang <github.com/bzhangcw>