Stochastic $L^\natural$-convex Function Minimization

Authors: Haixiang Zhang, Zeyu Zheng, Javad Lavaei

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical experiments are implemented to demonstrate the efficiency of our theoretical findings."
Researcher Affiliation | Academia | Haixiang Zhang, Department of Mathematics, University of California, Berkeley, Berkeley, CA 94704, haixiang_zhang@berkeley.edu; Zeyu Zheng, Department of IEOR, University of California, Berkeley, Berkeley, CA 94704, zyzheng@berkeley.edu; Javad Lavaei, Department of IEOR, University of California, Berkeley, Berkeley, CA 94704, lavaei@berkeley.edu
Pseudocode | Yes | Algorithm 1 (Rounding process to a feasible solution); Algorithm 2 (Projected and truncated SSGD method for the PAC guarantee)
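The pseudocode row names a projected and truncated SSGD method, but the report itself reproduces no implementation details. The following is a minimal sketch of the general projected stochastic subgradient descent idea with a final rounding step, under assumed ingredients: the toy quadratic objective, the box feasible region, the step size, and the naive coordinate-wise rounding are all illustrative choices, not the paper's Algorithm 2 (which additionally truncates iterates and uses a dedicated rounding process, Algorithm 1).

```python
import numpy as np

def projected_ssgd(stoch_subgrad, x0, lower, upper, steps=1000, lr=0.1, seed=0):
    """Sketch of projected stochastic subgradient descent over a box.

    `stoch_subgrad(x, rng)` returns a noisy subgradient estimate; the box
    [lower, upper] stands in for the feasible region.  Illustration only:
    the paper's Algorithm 2 also truncates iterates, and its Algorithm 1
    rounds to a feasible integer point rather than rounding coordinate-wise.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(steps):
        g = stoch_subgrad(x, rng)
        x = np.clip(x - lr * g, lower, upper)   # projection onto the box
        avg += x
    x_bar = avg / steps                          # averaged iterate
    return np.clip(np.round(x_bar), lower, upper)  # naive integer rounding

# Hypothetical separable quadratic with noisy gradients as a test objective.
target = np.array([2.0, -1.0, 3.0])
grad = lambda x, rng: 2 * (x - target) + rng.normal(scale=0.5, size=x.shape)
x_star = projected_ssgd(grad, x0=np.zeros(3), lower=-5, upper=5)
```

On this toy problem the rounded averaged iterate recovers the integer minimizer of the noiseless objective.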
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper states that numerical experiments were implemented on a "staffing allocation stochastic optimization problem" and by constructing a "separable stochastic convex function". It does not refer to or provide access information for any publicly available datasets.
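The datasets row notes that the experiments use a constructed "separable stochastic convex function" rather than a public dataset. A hypothetical construction of that kind is sketched below; the quadratic form, the coefficient range, and the Gaussian noise model are assumptions for illustration, not the paper's actual experimental setup.

```python
import numpy as np

def make_separable_objective(n=5, noise=1.0, seed=0):
    """Build f(x) = sum_i (x_i - c_i)^2, observed only through noisy samples.

    Separable convex functions of this form are L-natural-convex on the
    integer lattice.  The centers `c` and the noise model are illustrative
    assumptions, not the paper's exact construction.
    """
    rng = np.random.default_rng(seed)
    c = rng.integers(-3, 4, size=n).astype(float)

    def noisy_eval(x, sample_rng):
        # One stochastic sample F(x, xi) with E[F(x, xi)] = f(x).
        return float(np.sum((x - c) ** 2) + sample_rng.normal(scale=noise))

    return noisy_eval, c

noisy_f, c = make_separable_objective()
rng = np.random.default_rng(1)
samples = [noisy_f(c, rng) for _ in range(2000)]
mean_at_min = np.mean(samples)  # near 0: at the minimizer, E[F(c, xi)] = 0
```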
Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or citations to predefined splits).
Hardware Specification | No | The paper does not specify any details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software or libraries used in the experiments (e.g., "Python 3.8", "PyTorch 1.9").
Experiment Setup | No | The paper refers to "Details about the problem setup are provided in the appendix" and "More details about the experiment setup are provided in the appendix". However, these specific details, such as hyperparameter values or training configurations, are not present in the main text of the paper.