Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming
Authors: Shinsaku Sakaue, Taihei Oki
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that learning projection matrices from data is indeed beneficial: it leads to significantly higher solution quality than the existing random projection while greatly reducing the time for solving LPs. |
| Researcher Affiliation | Academia | Shinsaku Sakaue, The University of Tokyo and RIKEN AIP, Tokyo, Japan (sakaue@mist.i.u-tokyo.ac.jp); Taihei Oki, Hokkaido University, Hokkaido, Japan (oki@icredd.hokudai.ac.jp) |
| Pseudocode | No | The paper describes algorithms and methods but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/ssakaue/data-driven-projection-lp-code. |
| Open Datasets | Yes | We used five LPs in Netlib [15]... The dataset used in this study is freely available at https://www.netlib.org/lp/data/. |
| Dataset Splits | Yes | each of which consists of 300 LP instances (200 for training and 100 for testing). |
| Hardware Specification | Yes | We used MacBook Air with Apple M2 chip, 24 GB of memory, and macOS Sonoma 14.1. |
| Software Dependencies | Yes | We implemented algorithms in Python 3.9.7 with NumPy 1.23.2. We used Gurobi 10.0.1 [26] for solving LPs and computing projection in (6). |
| Experiment Setup | Yes | We initialized P with that obtained by PCA and conducted a single epoch of training, setting the learning rate to 0.01. For ColRand, PCA, and SGA, we used increasing values of the reduced dimensionality, k = n/100, ..., until it reached the maximum value no more than n/10, i.e., up to 10% of the original dimensionality. PCA and SGA learned projection matrices P from N = 200 training instances |
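To make the Experiment Setup row concrete, the following is a minimal sketch (not the authors' released code) of the PCA-based data-driven projection idea the paper describes: solve a set of training LPs, take the top-k singular vectors of their optimal solutions as a projection matrix P, then solve a k-dimensional reduced LP in y and recover x = Py. The problem sizes, the use of SciPy's `linprog` instead of Gurobi, and the random instance generator are all illustrative assumptions.

```python
# Sketch of a PCA-learned projection for LPs of the form
#   max c^T x  s.t.  A x <= b, x >= 0,
# reduced via x = P y to a k-dimensional LP in y.
# Assumptions (not from the paper): tiny synthetic sizes, SciPy's
# linprog as the solver, instances varying only in the objective c.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k, N = 50, 30, 5, 20  # original dim, constraints, reduced dim, training LPs

# Positive constraint matrix keeps every instance bounded and feasible.
A = rng.uniform(0.1, 1.0, size=(m, n))
b = np.full(m, 10.0)

# "Training": solve N LPs and stack their optimal solutions row-wise.
X = []
for _ in range(N):
    c = rng.standard_normal(n)
    res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None))  # maximize c^T x
    X.append(res.x)
X = np.asarray(X)  # shape (N, n)

# PCA on the (uncentered) solution matrix: top-k right singular
# vectors form the columns of the projection matrix P.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k].T  # shape (n, k)

# Solve a reduced LP on a new instance: constraints A(Py) <= b and
# Py >= 0 (i.e., -Py <= 0), then recover x_hat = P y*.
c_test = rng.standard_normal(n)
A_red = np.vstack([A @ P, -P])
b_red = np.concatenate([b, np.zeros(n)])
res_red = linprog(-(P.T @ c_test), A_ub=A_red, b_ub=b_red, bounds=(None, None))
x_hat = P @ res_red.x
```

The recovered `x_hat` is always feasible for the original LP by construction, and its objective value lower-bounds the true optimum; learning P (e.g., the paper's SGA, initialized from this PCA solution) tightens that gap.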