Stochastic Gauss-Newton Algorithms for Nonconvex Compositional Optimization
Authors: Quoc Tran-Dinh, Nhan Pham, Lam Nguyen
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we illustrate our theoretical results via two numerical examples on both synthetic and real datasets. |
| Researcher Affiliation | Collaboration | 1Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill, NC, USA. 2IBM Research, Thomas J. Watson Research Center, NY, USA. |
| Pseudocode | Yes | Algorithm 1 (Inexact Gauss-Newton (iGN)) |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the methodology described within the paper. |
| Open Datasets | Yes | We test three algorithms on four standard datasets: w8a, ijcnn1, covtype, and url combined from LIBSVM. Further information about these datasets is described in Supp. Doc. F. Available online at https://www.csie.ntu.edu.tw/~cjlin/libsvm/ |
| Dataset Splits | No | The paper mentions using standard datasets but does not explicitly provide details about training, validation, or test splits in the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'LIBSVM2' and 'Python codes' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We choose M := 1 and ρ := 1 for all datasets. We tune the learning rate for both N-SPIDER and SCGD and finally obtain η := 1.0 for both algorithms. We also set ε = 10^{-1} for N-SPIDER... |
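The datasets row above points to LIBSVM-format files (w8a, ijcnn1, covtype, url combined). As a minimal sketch of what loading such data involves (this parser is an illustration, not code from the paper), each line follows the standard `<label> <index>:<value> ...` sparse format with 1-based feature indices:

```python
def parse_libsvm_line(line):
    """Parse one LIBSVM-format line into (label, {index: value}).

    Assumes the standard sparse format: a numeric label followed by
    whitespace-separated "index:value" pairs with 1-based indices.
    """
    parts = line.strip().split()
    label = float(parts[0])
    features = {}
    for token in parts[1:]:
        idx, val = token.split(":")
        features[int(idx)] = float(val)
    return label, features


# Example: one line as it would appear in a LIBSVM file.
label, feats = parse_libsvm_line("+1 3:0.5 7:1.0")
```

In practice one would typically use `sklearn.datasets.load_svmlight_file` instead of a hand-rolled parser; the sketch only makes the file format concrete.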