Inexact Proximal Gradient Methods for Non-Convex and Non-Smooth Optimization
Authors: Bin Gu, De Wang, Zhouyuan Huo, Heng Huang
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we show the applications of our inexact proximal gradient algorithms on three representative non-convex learning problems. Empirical results confirm the superiority of our new inexact proximal gradient algorithms. |
| Researcher Affiliation | Academia | Bin Gu¹, De Wang², Zhouyuan Huo¹, Heng Huang¹* — ¹Department of Electrical & Computer Engineering, University of Pittsburgh, USA; ²Dept. of Computer Science and Engineering, University of Texas at Arlington, USA |
| Pseudocode | Yes | Algorithm 1 Basic inexact proximal gradient method (IPG); Algorithm 2 Accelerated inexact proximal gradient method (AIPG); Algorithm 3 Nonmonotone accelerated inexact proximal gradient method (nm AIPG) |
| Open Source Code | No | The paper does not provide any specific links to source code for the proposed methods, nor does it state that the code is available in supplementary materials or upon request. |
| Open Datasets | Yes | Table 3: The datasets used in the experiments. ... Cardiac; Coil20; Soc-sign-epinions (SSE); Soc-Epinions1 (SE); Gas Sensor Array Drift (GS); Year Prediction MSD (YP). Footnotes provide URLs: http://www1.cs.columbia.edu/CAVE/software/softlib/coil20.php, http://snap.stanford.edu/data, https://archive.ics.uci.edu/ml/datasets.html |
| Dataset Splits | No | The paper mentions datasets used but does not provide specific details on training, validation, or test splits (e.g., percentages or counts). |
| Hardware Specification | No | The paper states "We implement our IPG, AIPG and nm AIPG methods for robust OSCAR in Matlab" but does not specify any hardware details such as CPU, GPU, or memory used for the experiments. |
| Software Dependencies | No | The paper mentions implementation in "Matlab" but does not specify any version numbers for Matlab or any other software libraries or dependencies used. |
| Experiment Setup | No | The paper mentions some parameters like `δ = 0.6` (for nm AIPG) and states that `stepsize γ < 1/L` is either set manually or by a backtracking line-search procedure. However, it does not provide concrete values for hyperparameters like `λ1`, `λ2`, or specific values for `γ` used across the three applications, nor does it detail other training configurations common in experiments (e.g., epochs, batch size, specific optimizer settings if applicable). |
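Since the paper releases no code, the structure of its basic inexact proximal gradient method (Algorithm 1, IPG) can only be sketched from the pseudocode description. Below is a minimal, hypothetical NumPy illustration: the proximal subproblem is solved *approximately* by a few inner gradient steps (the source of the inexactness the paper analyzes), and the outer loop takes a gradient step on the smooth part `f` followed by the inexact prox of the non-smooth part `g`. All names (`inexact_prox`, `ipg`), the inner-solver choice, and the parameter values are assumptions for illustration, not the authors' implementation; the paper's actual `g` terms (e.g., the OSCAR regularizer) have more involved proximal subproblems.

```python
import numpy as np

def inexact_prox(g_grad, z0, center, gamma, inner_iters=10, lr=0.1):
    """Approximately solve argmin_z g(z) + ||z - center||^2 / (2*gamma)
    by a few inner gradient steps (hypothetical inner solver; the
    resulting error is the 'inexactness' the paper studies)."""
    z = z0.copy()
    for _ in range(inner_iters):
        z -= lr * (g_grad(z) + (z - center) / gamma)
    return z

def ipg(f_grad, g_grad, x0, gamma, outer_iters=200, inner_iters=10):
    """Sketch of the basic IPG loop: forward gradient step on the
    smooth part f, then an inexact proximal step on g.
    gamma should satisfy gamma < 1/L for L-smooth f, per the paper."""
    x = x0.copy()
    for _ in range(outer_iters):
        center = x - gamma * f_grad(x)          # forward step
        x = inexact_prox(g_grad, x, center,      # inexact backward step,
                         gamma, inner_iters)     # warm-started at x
    return x
```

As a sanity check on a toy problem with f(x) = ½‖x − a‖² and g(x) = ½λ‖x‖² (both smooth, so the minimizer a/(1+λ) is known in closed form), the warm-started inner solves become increasingly accurate near the fixed point and the iterates converge to the true solution.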