A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained Optimization

Authors: Digvijay Boob, Qi Deng, Guanghui Lan, Yilin Wang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform numerical experiments to demonstrate the effectiveness of our new model and efficiency of the proposed algorithm for large scale problems."
Researcher Affiliation | Academia | Digvijay Boob (Southern Methodist University, Dallas, TX; dboob@smu.edu); Qi Deng (Shanghai University of Finance and Economics, Shanghai, China; qideng@sufe.edu.cn); Guanghui Lan (Georgia Tech, Atlanta, GA; george.lan@isye.gatech.edu); Yilin Wang (Shanghai University of Finance and Economics, Shanghai, China; 2017110765@live.sufe.edu.cn)
Pseudocode | Yes | Algorithm 1: Level constrained proximal point (LCPP) method
Open Source Code | No | The paper describes DCCP [27] as an 'open-source package' and states that the authors 'replicate their setup in our own implementation'. However, there is no explicit statement or link indicating that the authors' own implementation of the described methodology is publicly available.
Open Datasets | Yes | Details of the testing datasets are summarized in Table 3 of the paper: real-sim, rcv1.binary, mnist, gisette, E2006-tfidf, and Year Prediction MSD. These are widely recognized machine-learning benchmark datasets.
Dataset Splits | Yes | real-sim is randomly partitioned into 70% training data and 30% testing data (a loading-and-splitting sketch follows this table).
Hardware Specification | No | The paper provides no details about the hardware used to run the experiments, such as CPU/GPU models, memory, or cloud computing instances.
Software Dependencies | No | The paper mentions using Sklearn for Lasso, GIST for MCP-regularized problems, DCCP, MOSEK, and CVX, but provides no version numbers for any of these software components or libraries.
Experiment Setup | Yes | To fix the parameters, the authors choose γ = 10^-5 for the gisette dataset and γ = 10^-4 for the other datasets. Each LCPP subproblem is solved by gradient descent for at most 10 iterations, breaking once the criterion ||x_k − x_{k−1}|| / ||x_k|| ≤ ε is met; the number of outer loops is set to 1000 to run LCPP sufficiently long. The authors set λ = 2, θ = 0.25 in the MCP function. For the comparison, both GIST and LCPP set λ = 2 and θ = 5 in the MCP function, with a maximum of 2000 iterations for all algorithms (see the sketch after this table).
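
The datasets listed above are commonly distributed in LIBSVM format, and the report quotes a 70/30 random partition for real-sim. Below is a minimal sketch of that loading-and-splitting step, assuming scikit-learn and a local copy of the file; the file path and random seed are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: load a LIBSVM-format dataset and apply the 70/30
# random partition described for real-sim. The file path and the
# random seed are illustrative assumptions, not from the paper.
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split

X, y = load_svmlight_file("real-sim")  # sparse feature matrix, labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, test_size=0.3, random_state=0
)
```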
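
To make the reported inner-loop stopping rule and the MCP penalty concrete, here is a minimal Python sketch. It assumes the standard minimax concave penalty parametrization, p(t; λ, θ) = λ|t| − t²/(2θ) for |t| ≤ θλ and θλ²/2 otherwise (the paper's exact parametrization may differ), and a generic subproblem gradient oracle `grad`; the step size and the tolerance ε are left as inputs because the report does not fix their values.

```python
import numpy as np

def mcp(t, lam=2.0, theta=0.25):
    """Elementwise minimax concave penalty (MCP), standard parametrization:
    lam*|t| - t**2/(2*theta) for |t| <= theta*lam, else theta*lam**2/2.
    Defaults follow the setup quoted above; the paper's exact
    parametrization of MCP may differ."""
    a = np.abs(t)
    return np.where(a <= theta * lam,
                    lam * a - t ** 2 / (2.0 * theta),
                    0.5 * theta * lam ** 2)

def solve_subproblem(x0, grad, step, eps, max_iter=10):
    """Gradient descent on one LCPP subproblem with the reported stopping
    rule: at most 10 iterations, break once
    ||x_k - x_{k-1}|| / ||x_k|| <= eps.
    `grad` and `step` are placeholders for the subproblem gradient oracle
    and step size, which the report does not specify."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_prev = x
        x = x_prev - step * grad(x_prev)
        # Relative-change test, written to avoid dividing by a zero norm.
        if np.linalg.norm(x - x_prev) <= eps * np.linalg.norm(x):
            break
    return x
```

An outer loop would then call solve_subproblem once per LCPP iteration, up to the 1000 outer loops noted above; this is only a sketch of the reported stopping logic, not the authors' implementation.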