Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case

Authors: Taihei Oki, Shinsaku Sakaue

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments in Section 5 confirm that using predictions can improve empirical computation costs.
Researcher Affiliation | Academia | Taihei Oki, The University of Tokyo, Tokyo, Japan (oki@mist.i.u-tokyo.ac.jp); Shinsaku Sakaue, The University of Tokyo, Tokyo, Japan (sakaue@mist.i.u-tokyo.ac.jp)
Pseudocode | Yes | Algorithm 1: Greedy algorithm (see the first code sketch after this table).
Open Source Code | Yes | The source code is available at https://github.com/ssakaue/alps-m-convex-code.
Open Datasets | No | We thus create a dataset of T = 100 random instances for each σ ∈ {1, 5, 10, 20}. We generate 10 such random datasets independently to calculate the mean and standard deviation of the results. We then generate T = 100 instances by perturbing parameters defining constraints and objectives with Gaussian noise multiplied by σ ∈ {0.1, 1.0, 10.0}. (A generation sketch follows the table.)
Dataset Splits | No | The paper describes generating instances and using online learning, but it does not specify explicit training, validation, and test splits for a static dataset.
Hardware Specification | Yes | We used MacBook Air with Apple M2 chip, 24 GB of memory, and macOS Ventura 13.2.1.
Software Dependencies | Yes | We implemented algorithms in Python 3.9.12 with libraries such as NumPy 1.23.2. We used Gurobi 10.0.1 [17].
Experiment Setup | Yes | We set c_Y = max{∑_{i∈Y} i + σu_a, 1}, where u_a follows the standard normal distribution and σ controls the noise strength. We let ℓ_Y = min{2^h + u_b, R/2^{log n − h}}, where h ∈ {0, 1, …, log n} is the height of Y in T_F (a leaf Y has h = 0) and u_b is drawn uniformly at random from {0, 1, …, 50}; the minimum with R/2^{log n − h} = R·2^h/n is taken to ensure that the feasible region is non-empty. … We learn predictions x̂_t ∈ ℝ^N for t = 1, …, T by using OSD on V = {x ∈ [0, R]^N : x(N) = R} with a step size of 0.01√(R/n). (An OSD sketch is given after the table.)
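
On the pseudocode row: the greedy method in the M-convex setting is the classical steepest-descent scheme that repeatedly applies the best exchange step x ↦ x − e_i + e_j while it strictly decreases f; the paper's speed-up comes from warm-starting it at a predicted point. Below is a minimal Python sketch under that reading; the function names, objective, and starting point are illustrative, not the paper's implementation.

```python
import itertools

import numpy as np

def greedy_minimize(f, x0, max_iter=10_000):
    """Steepest-descent (greedy) minimization of an M-convex function f.

    Repeatedly moves to the best exchange neighbor x - e_i + e_j as long
    as this strictly decreases f. For M-convex functions, local optimality
    under such exchanges implies global optimality, so the returned point
    is a global minimizer. `x0` plays the role of the warm-start point
    (e.g., a prediction) and must lie in the effective domain of f.
    """
    x = np.asarray(x0, dtype=int).copy()
    fx = f(x)
    for _ in range(max_iter):
        best_y, best_val = None, fx
        for i, j in itertools.permutations(range(len(x)), 2):
            y = x.copy()
            y[i] -= 1
            y[j] += 1
            val = f(y)
            if val < best_val:
                best_y, best_val = y, val
        if best_y is None:  # no improving exchange: x is optimal
            break
        x, fx = best_y, best_val
    return x

# Example: f(x) = sum_i (x_i - a_i)^2 restricted to x(N) = 8 is M-convex.
a = np.array([3, 1, 4])
f = lambda x: float(np.sum((x - a) ** 2))
print(greedy_minimize(f, [8, 0, 0]))  # -> [3, 1, 4], the minimizer with sum 8
```

Exchange steps preserve x(N), so the iterates stay on the base constraint; the closer the warm start x̂ is to an optimum, the fewer exchange rounds are needed, which is exactly the quantity predictions are meant to shrink.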
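On the dataset row: a rough, entirely hypothetical sketch of the quoted perturbation protocol is shown below; the base parameter vector, its shape, and the perturbation target are placeholders, since the paper perturbs different constraint and objective parameters per experiment.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def make_dataset(base, sigma, T=100):
    """T random instances, each a Gaussian perturbation of `base` scaled
    by the noise strength sigma (placeholder generator)."""
    return [base + sigma * rng.standard_normal(base.shape) for _ in range(T)]

base = np.ones(16)  # hypothetical base parameter vector
# 10 independent datasets per noise level, matching the quoted protocol
datasets = {sigma: [make_dataset(base, sigma) for _ in range(10)]
            for sigma in (1, 5, 10, 20)}
```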
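On the experiment-setup row: since x ≥ 0 and x(N) = R together already force x_i ≤ R, projecting onto V reduces to Euclidean projection onto the simplex scaled to radius R. A self-contained sketch of one OSD update, assuming a standard sort-based simplex projection and reading the garbled step size as 0.01√(R/n):

```python
import numpy as np

def project_scaled_simplex(v, R):
    """Euclidean projection onto {x >= 0 : x(N) = R}. Because x >= 0 and
    x(N) = R already imply x_i <= R, this set equals
    V = {x in [0, R]^N : x(N) = R}. Sort-based method."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - R
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def osd_step(x_hat, grad, R):
    """One projected online (sub)gradient descent update on V."""
    n = len(x_hat)
    eta = 0.01 * np.sqrt(R / n)  # step size as read from the quoted setup
    return project_scaled_simplex(x_hat - eta * grad, R)
```

The sort-based projection runs in O(N log N) time per round, negligible next to solving each optimization instance.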