Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization
Authors: Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low
NeurIPS 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we conduct numerical experiments to demonstrate the effectiveness of the proposed methods. |
| Researcher Affiliation | Academia | Zhuanghua Liu (Department of Computer Science, National University of Singapore; CNRS@CREATE LTD, 1 Create Way, #08-01 CREATE Tower, Singapore 138602); Luo Luo (School of Data Science, Fudan University; Shanghai Key Laboratory for Contemporary Applied Mathematics); Bryan Kian Hsiang Low (Department of Computer Science, National University of Singapore) |
| Pseudocode | Yes | Algorithm 1: GFCOM(x0, η, T, bf, bg) ... Algorithm 2: GFCOM+(x0, η, T, bf, bf′, bg, bg′, m) ... Algorithm 3: WS-GFCOM(x0, η0, T0, bg,0, η, T, bf, bg, bf′, bg′, m) |
| Open Source Code | No | We are clearing the code with internal compliance and will release it upon approval. |
| Open Datasets | Yes | We compare all the methods on 6 different portfolio datasets formed on Size and Operating Profitability (http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html). |
| Dataset Splits | No | The paper does not explicitly provide details about training, validation, or test dataset splits, percentages, or methodologies for splitting. |
| Hardware Specification | No | The paper does not mention any specific hardware (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers. |
| Experiment Setup | Yes | We set δ = 0.1 for the GFCOM and GFCOM+ methods. ... For all algorithms, we tune the stepsize among {1×10⁻⁵, 3×10⁻⁵, ..., 1×10⁻³, 3×10⁻³}. ... We choose the mini-batch size bf = bg = 1000. In addition, we set bf′ = 100, bg′ = 1000, and m = bf/bf′ = 10 for the GFCOM+ algorithm. |
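
For reference, the quoted experiment setup can be summarized as a small configuration sketch. This is a minimal illustration of the reported values only; the dictionary keys and structure below (e.g., `EXPERIMENT_CONFIG`, `stepsize_grid`, `b_f_prime`) are assumptions for readability and do not come from the authors' code, which has not been released.

```python
# Hedged sketch of the hyperparameter configuration quoted in the table above.
# Key names are illustrative assumptions; only the numeric values are taken
# from the paper's reported experiment setup.

EXPERIMENT_CONFIG = {
    "delta": 0.1,  # δ used for the GFCOM and GFCOM+ methods
    # Stepsize grid {1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3} tuned for all algorithms
    "stepsize_grid": [c * 10 ** (-e) for e in (5, 4, 3) for c in (1, 3)],
    "b_f": 1000,        # mini-batch size bf (reported bf = bg = 1000)
    "b_g": 1000,        # mini-batch size bg
    # Additional batch sizes reported only for GFCOM+ (written bf′, bg′ here)
    "b_f_prime": 100,
    "b_g_prime": 1000,
    "m": 1000 // 100,   # reported as m = bf / bf′ = 10
}

if __name__ == "__main__":
    print(EXPERIMENT_CONFIG)
```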