Learning-Augmented Algorithms for Online Linear and Semidefinite Programming

Authors: Elena Grigorescu, Young-San Lin, Sandeep Silwal, Maoyuan Song, Samson Zhou

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 3, Empirical Evaluations: "We demonstrate the applicability of our algorithmic framework on synthetic and real datasets. Our focus is on online covering algorithms with fractional hints and entries. We focus on this setting since it is the simplest of our algorithms and already captures key points of the overall framework."
Researcher Affiliation | Academia | Elena Grigorescu (Purdue University, USA, elena-g@purdue.edu); Young-San Lin (University of Melbourne, Australia, nilnamuh@gmail.com); Sandeep Silwal (MIT, USA, silwal@mit.edu); Maoyuan Song (Purdue University, USA, song683@purdue.edu); Samson Zhou (UC Berkeley and Rice University, USA, samsonzhou@gmail.com)
Pseudocode | Yes | "We describe the continuous primal-dual approach in phase r and round i in Algorithms 1 and 2." Algorithm 1: PDLA Online Covering LP; Algorithm 2: PDLA Online Covering SDP.
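The paper's Algorithms 1 and 2 follow a continuous primal-dual scheme. As a rough illustration of the underlying idea only (this is a simplified discrete multiplicative-update sketch, not the paper's PDLA algorithm; the function name, update rule, and step size `eps` are our own assumptions):

```python
def online_covering_step(x, a, c, eps=0.01):
    """Grow x multiplicatively until the arriving covering constraint
    sum_j a[j] * x[j] >= 1 is satisfied. Since x only ever increases,
    all previously satisfied constraints remain feasible."""
    d = sum(1 for aj in a if aj > 0)  # variables active in this constraint
    while sum(aj * xj for aj, xj in zip(a, x)) < 1:
        # Multiplicative growth plus a small additive seed term, scaled by
        # the objective coefficient c[j] of each active variable.
        x = [xj * (1 + eps * aj / cj) + eps * aj / (cj * d) if aj > 0 else xj
             for xj, aj, cj in zip(x, a, c)]
    return x

# Toy instance of min c.x subject to covering constraints revealed online.
c = [1.0, 2.0, 1.0]                        # objective coefficients
x = [0.0, 0.0, 0.0]                        # online fractional solution
rows = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]  # constraint rows, one per round
for a in rows:
    x = online_covering_step(x, a, c)
```

The monotone (increase-only) updates are what make the scheme valid online: a decision, once made, is never revoked.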
Open Source Code | No | The paper states that experiments were "implemented in Python 3.9" but makes no statement about releasing the source code for the described methodology and provides no link to a code repository.
Open Datasets | Yes | "Our graph dataset is constructed as follows. We have a sequence of nine (unweighted) graphs which represent an internet router network sampled across time [LK14, LKF05]." The graphs can be accessed at https://snap.stanford.edu/data/Oregon-1.html
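The Oregon graphs linked above are distributed as SNAP-style edge lists (comment lines prefixed with `#`, then one whitespace-separated node-ID pair per line). A minimal parser sketch, assuming that file format:

```python
def parse_snap_edges(lines):
    """Parse a SNAP-style edge list: lines starting with '#' are comments,
    every other non-empty line holds two whitespace-separated node IDs."""
    edges = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        u, v = map(int, line.split())
        edges.append((u, v))
    return edges

# Tiny sample in the same shape as the downloaded files (contents invented).
sample = [
    "# Undirected graph (sample)",
    "# FromNodeId\tToNodeId",
    "1\t2",
    "1\t3",
]
edges = parse_snap_edges(sample)
```

In practice one would call `parse_snap_edges(open(path))` on each of the nine downloaded snapshot files.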
Dataset Splits | No | The paper describes how predictions are generated and used as hints in a dynamic, online setting (e.g., "the predictions for all instances i ≥ 1 are given by the optimal offline solution generated from the first instance A0"), and how the synthetic datasets and time-varying graphs are constructed. However, it does not specify traditional train/validation/test splits with percentages or sample counts.
Hardware Specification | Yes | "All experiments are done in a 2021 M1 MacBook Pro with 32 gigabytes of RAM and implemented in Python 3.9."
Software Dependencies | Yes | "All experiments are done in a 2021 M1 MacBook Pro with 32 gigabytes of RAM and implemented in Python 3.9." (Python 3.9 is the only software dependency stated.)
Experiment Setup | No | The paper describes the construction of the synthetic and graph datasets and the types of predictions used (e.g., "noisily corrupt the entries of x by setting the entries to be 0 independently with probability p"). However, it does not provide specific hyperparameters for the algorithms themselves, such as learning rates, batch sizes, or optimizer settings, nor a dedicated section with experimental setup details.
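The quoted perturbation (zeroing hint entries independently with probability p) is simple to reproduce. A minimal sketch, where the function name and the seeding convention are our own choices:

```python
import random

def corrupt_hint(x_hint, p, seed=0):
    """Zero each coordinate of a fractional hint vector independently
    with probability p, leaving the remaining entries untouched."""
    rng = random.Random(seed)  # fixed seed for reproducible corruption
    return [0.0 if rng.random() < p else xj for xj in x_hint]

hint = [0.5, 0.25, 1.0, 0.75]
unchanged = corrupt_hint(hint, p=0.0)  # p = 0 keeps the hint intact
erased = corrupt_hint(hint, p=1.0)     # p = 1 zeroes every entry
```

Sweeping p from 0 to 1 then interpolates between a perfect hint and a fully uninformative one, which is the axis along which such experiments typically vary prediction quality.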