Penalty-based Methods for Simple Bilevel Optimization under Hölderian Error Bounds

Authors: Pengyu Chen, Xu Shi, Rujun Jiang, Jiulin Wang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical experiments demonstrate the effectiveness of our algorithms."
Researcher Affiliation | Academia | Pengyu Chen, School of Data Science, Fudan University (pychen22@m.fudan.edu.cn); Xu Shi, School of Data Science, Fudan University (xshi22@m.fudan.edu.cn); Rujun Jiang, School of Data Science, Fudan University (rjjiang@fudan.edu.cn); Jiulin Wang, School of Data Science, Fudan University (wangjiulin@fudan.edu.cn)
Pseudocode | Yes | "Algorithm 1 Penalty-based APG (PB-APG)"
Open Source Code | Yes | NeurIPS checklist question: "Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?" Answer: [Yes]. Justification: "Please refer to Section 4 and Appendix F and the supplemental material."
Open Datasets | Yes | "We conduct the first experiment using the a1a.t data from LIBSVM datasets" and "In the second experiment, we address the problem of least squares regression using the Year Prediction MSD data from the UCI Machine Learning Repository."
Dataset Splits | No | "For this experiment, a sample of 1,000 instances is taken from the data, denoted as A. The corresponding labels for these instances are denoted as b, where each label b_i is either 1 or −1, corresponding to the i-th instance a_i." and "For this experiment, a sample of m = 1,000 songs is taken from the data, and the feature matrix and release years vector are denoted as A and b, respectively." The paper describes sampling from the datasets but does not specify training, validation, or test splits. (A hypothetical sketch of this sampling step appears after the table.)
Hardware Specification | Yes | "All simulations are implemented using MATLAB R2023a on a PC running Windows 11 with an AMD Ryzen R7-7840H CPU (3.80 GHz) and 16 GB RAM."
Software Dependencies | Yes | "All simulations are implemented using MATLAB R2023a."
Experiment Setup | Yes | "For the PB-APG and PB-APG-sc algorithms, we set the value of γ = 10^5, and we terminate the algorithms when ‖x_{k+1} − x_k‖ ≤ 10^{−10}. For the aPB-APG and aPB-APG-sc algorithms, we set γ_0 = 1/25, ν = 20, η = 10, and ε_0 = 10^{−6}. The iterations of these two algorithms continue until ε_k reaches 10^{−10} (meanwhile, γ = 10^5)." (A sketch of the PB-APG loop with these settings appears after the table.)
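
The sampling step quoted in the Dataset Splits row can be illustrated with a minimal MATLAB sketch. This is a hypothetical reconstruction, not the authors' released code: it assumes the LIBSVM MATLAB interface (libsvmread) is on the path, and the uniform random choice of indices is an assumption, since the paper does not state how the 1,000 instances are drawn.

    % Load a1a.t via the LIBSVM MATLAB interface and draw 1,000 instances.
    [labels, instances] = libsvmread('a1a.t');  % labels in {+1, -1}, sparse feature matrix
    idx = randperm(size(instances, 1), 1000);   % assumed: uniform random sample of 1,000 rows
    A = full(instances(idx, :));                % feature matrix A (1,000 x d)
    b = labels(idx);                            % each b(i) is either +1 or -1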
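
The Experiment Setup row can likewise be made concrete. The sketch below runs a FISTA-style accelerated gradient loop on the penalized objective f(x) + γ·g(x) with γ = 10^5 and stopping rule ‖x_{k+1} − x_k‖ ≤ 10^{−10}, matching the quoted settings. It is an illustrative reconstruction under stated assumptions (gradient handles grad_f and grad_g, step size 1/L, and the smooth unconstrained case with no proximal step), not the paper's Algorithm 1 verbatim.

    % Hypothetical fixed-penalty PB-APG sketch: accelerated gradient descent
    % on the penalized objective f + gamma*g, e.g. gamma = 1e5, tol = 1e-10.
    function x = pb_apg_sketch(grad_f, grad_g, x0, L, gamma, tol, max_iter)
        x = x0; y = x0; t = 1;
        for k = 1:max_iter
            x_next = y - (grad_f(y) + gamma * grad_g(y)) / L;  % gradient step on f + gamma*g
            t_next = (1 + sqrt(1 + 4 * t^2)) / 2;              % Nesterov momentum coefficient
            y = x_next + ((t - 1) / t_next) * (x_next - x);    % extrapolation point
            if norm(x_next - x) <= tol                         % stop when ||x_{k+1} - x_k|| <= tol
                break;
            end
            x = x_next; t = t_next;
        end
        x = x_next;
    end

The adaptive variants (aPB-APG and aPB-APG-sc) would wrap such an inner solver, warm-starting it while the penalty γ_k grows from γ_0 = 1/25 and the inner tolerance ε_k shrinks from ε_0 = 10^{−6} toward 10^{−10}; the precise update rules driven by ν = 20 and η = 10 are given in the paper and are not reproduced here.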