Solving Linear Programs with Fast Online Learning Algorithms

Authors: Wenzhi Gao, Dongdong Ge, Chunlin Sun, Yinyu Ye

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments demonstrate that our methods can serve either as an approximate direct solver or as an initialization subroutine for exact LP solving.
Researcher Affiliation | Academia | (1) School of Information Management and Engineering, Shanghai University of Finance and Economics; (2) Institute for Computational and Mathematical Engineering, Stanford University; (3) Management Science & Engineering, Stanford University.
Pseudocode | Yes | Algorithm 1: Fast online algorithms for LP; Algorithm 2: Online algorithm with duplication
Open Source Code | No | The paper does not provide an explicit statement about releasing its source code for the described methodology or a link to a repository.
Open Datasets | Yes | We use synthetic data from the multi-knapsack benchmark. More specifically, we generate benchmark multi-knapsack problems max { ⟨c, x⟩ : Ax ≤ b, 0 ≤ x ≤ 1 } as discussed in (Chu and Beasley, 1998)... In addition, we collect 13 instances from (Mittelmann, 2022).
Dataset Splits | No | The paper describes dataset generation parameters and the experimental setup but does not explicitly provide training, validation, or test dataset splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments.
Software Dependencies | Yes | We adopt the sifting solver in CPLEX 12.10 as our benchmark solver. ... Table 6: Time to solve MIPLIB instances to a 1e-03 relative-accuracy solution, compared with Gurobi v9.5.
Experiment Setup | Yes | Testing Configuration and Setup ... 2) Initial Point: we let online algorithms start from 0. ... 4) Duplication: we allow K ∈ {1, 2, 4, 8, 16, 32}. 5) Stepsize: we take γ = 1/sqrt(Kmn). ... 7) Dual Stabilization: we implement a basic dual stabilization procedure which takes α = 0.4 in (8).
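
The Pseudocode and Experiment Setup rows above name Algorithm 1 (fast online algorithms for LP) and fix the initial dual point at 0. The sketch below shows the kind of single-pass, projected dual-subgradient method such an algorithm describes, applied to the max { ⟨c, x⟩ : Ax ≤ b, 0 ≤ x ≤ 1 } form quoted in the Open Datasets row. The function name online_lp_pass, the variable names, and the exact decision and update rules are our assumptions rather than the authors' Algorithm 1.

```python
import numpy as np

def online_lp_pass(c, A, b, gamma):
    """One pass of a projected dual-subgradient online LP method for
    max <c, x> s.t. Ax <= b, 0 <= x <= 1 (a sketch, not the paper's
    exact Algorithm 1). Columns are visited once, in order."""
    m, n = A.shape
    y = np.zeros(m)            # dual prices start from 0, as in the reported setup
    x = np.zeros(n)
    d = b / n                  # average resource budget per column
    for j in range(n):
        a_j = A[:, j]
        # accept column j if its reward beats the current price of its resources
        x[j] = 1.0 if c[j] > a_j @ y else 0.0
        # projected subgradient step: raise prices on over-consumed resources
        y = np.maximum(y + gamma * (a_j * x[j] - d), 0.0)
    return x, y
```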
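The Experiment Setup row also reports duplication levels K ∈ {1, 2, 4, 8, 16, 32}, the stepsize γ = 1/sqrt(Kmn), and a dual stabilization weight α = 0.4 in equation (8). The wrapper below is a hedged guess at how Algorithm 2 might combine these pieces: each column is visited K times in random order, the K decisions per column are averaged into a fractional solution, and the dual is smoothed by exponential averaging. The per-copy budget b/(Kn) and the averaging form of the stabilization are assumptions on our part, not the paper's stated update.

```python
import numpy as np

def online_lp_with_duplication(c, A, b, K, alpha=0.4, seed=0):
    """Sketch of an online-with-duplication run: replicate each column K
    times, visit the copies in random order with stepsize 1/sqrt(K*m*n),
    and average the copies' decisions into a fractional x. The dual
    stabilization below (exponential averaging with weight alpha) is an
    assumed form of the paper's update (8), not a quoted one."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    gamma = 1.0 / np.sqrt(K * m * n)     # stepsize reported in the setup
    d = b / (K * n)                      # assumed per-copy resource budget
    order = rng.permutation(np.tile(np.arange(n), K))
    y = np.zeros(m)
    y_bar = np.zeros(m)                  # stabilized dual used for decisions
    x = np.zeros(n)
    for j in order:
        a_j = A[:, j]
        decision = 1.0 if c[j] > a_j @ y_bar else 0.0
        x[j] += decision / K             # average over the K copies
        y = np.maximum(y + gamma * (a_j * decision - d), 0.0)
        y_bar = alpha * y_bar + (1.0 - alpha) * y
    return x, y_bar
```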
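The Open Datasets row points to synthetic multi-knapsack instances generated as in (Chu and Beasley, 1998). The generator below follows the scheme commonly attributed to that benchmark (integer weights in [0, 1000], capacities set by a tightness ratio, values correlated with average weight); the particular constants and the default tightness of 0.25 are illustrative assumptions, since the quoted setup does not list them.

```python
import numpy as np

def generate_mkp(m, n, tightness=0.25, seed=0):
    """Random multi-knapsack LP instance  max <c, x> s.t. Ax <= b,
    0 <= x <= 1,  generated in the style of Chu and Beasley (1998).
    Constants here are illustrative, not values taken from the paper."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, 1001, size=(m, n)).astype(float)   # item weights
    b = tightness * A.sum(axis=1)                           # knapsack capacities
    c = A.mean(axis=0) + 500.0 * rng.random(n)              # weight-correlated values
    return c, A, b
```

The resulting (c, A, b) triple can be passed directly to the sketches above, for example with K = 8 drawn from the reported duplication levels, which gives γ = 1/sqrt(8mn).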