Strong NP-Hardness for Sparse Optimization with Concave Penalty Functions

Authors: Yichen Chen, Dongdong Ge, Mengdi Wang, Zizhuo Wang, Yinyu Ye, Hao Yin

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove that finding an $O(n^{c_1} d^{c_2})$-optimal solution to the regularized sparse optimization problem is strongly NP-hard for any $c_1, c_2 \in [0, 1)$ such that $c_1 + c_2 < 1$. Our main contributions are three-fold. 1. We prove the strong NP-hardness for Problem 1 with general loss functions. ... Section 6 provides a proof of the main results in a simplified setting. (The problem class behind this claim is sketched after the table.)
Researcher Affiliation | Academia | Princeton University, NJ, USA; Shanghai University of Finance and Economics, Shanghai, China; University of Minnesota, MN, USA; Stanford University, CA, USA.
Pseudocode | No | The paper is theoretical and focuses on mathematical proofs, such as 'Proof of Theorem 1' in Section 6. It does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statements about making source code available, nor does it provide links to a code repository. This is typical for purely theoretical papers.
Open Datasets | No | The paper is theoretical and does not involve empirical experiments with datasets, so no training dataset information is provided.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments, so there is no mention of training, validation, or test splits for datasets.
Hardware Specification | No | The paper is theoretical and does not describe any computational experiments or the hardware used to run them.
Software Dependencies | No | The paper is theoretical and does not describe any software dependencies with version numbers, as it does not report on computational experiments.
Experiment Setup | No | The paper is theoretical and focuses on mathematical proofs, not on empirical experiments, so no experimental setup details, hyperparameters, or training configurations are provided.
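For context on the hardness claim quoted under Research Type, the following is a minimal LaTeX sketch of the regularized sparse optimization problem (Problem 1 in the paper). The exact regularity conditions on the loss and the penalty are stated in the paper; the form below is an assumption-level summary, not a verbatim reproduction.

    % Sketch of the regularized sparse optimization problem (Problem 1);
    % precise conditions on \ell and p are given in the paper.
    \[
      \min_{x \in \mathbb{R}^d} \;
        \sum_{i=1}^{n} \ell\bigl(a_i^{\top} x,\, b_i\bigr)
        \;+\; \lambda \sum_{j=1}^{d} p\bigl(|x_j|\bigr)
    \]

Here the $a_i \in \mathbb{R}^d$ and $b_i$ are data, $\ell$ is a loss function, $\lambda > 0$, and $p$ is a concave penalty with $p(0) = 0$ (for example, the $L_q$ penalty $p(t) = t^q$ with $0 < q < 1$). The quoted result then states that computing any solution whose objective value is within $O(n^{c_1} d^{c_2})$ of the globally optimal objective value is strongly NP-hard whenever $c_1, c_2 \in [0, 1)$ and $c_1 + c_2 < 1$.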