Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Nonconvex Theory of $M$-estimators with Decomposable Regularizers

Authors: Weiwei Liu

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper advances the theory of regularized M-estimators with decomposable regularizers from convex to nonconvex. Our main results show that the estimation errors still lie in a restricted set and we can recover the convergence rates of the estimation error when the loss function is nonconvex. Moreover, we apply our main results to two nonconvex models: corrected linear regression and the ℓ1-penalized Lasso estimator. Our key technical analysis of the two examples is to prove that, with high probability, a form of the restricted strong convexity (RSC) condition and a dual norm bound hold.
Researcher Affiliation | Academia | 1School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, China. Correspondence to: Weiwei Liu <EMAIL>.
Pseudocode | No | The paper presents theoretical proofs and theorems, but does not include any explicitly labeled pseudocode or algorithm blocks. The methods are described mathematically.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to a code repository or mention code in supplementary materials.
Open Datasets | No | The paper uses theoretical models such as a 'corrupted linear model with additive noise' and a 'Gaussian linear model' with assumptions like 'i.i.d. observations' and random sub-Gaussian matrices X, W ∈ ℝ^(n×d). It does not refer to or provide access information for any specific public or open datasets.
Dataset Splits | No | The paper focuses on theoretical analysis and does not use specific empirical datasets; therefore, there is no mention of dataset splits (e.g., training, validation, test splits).
Hardware Specification | No | The paper is theoretical in nature and does not describe any experimental setup or the specific hardware used to conduct experiments.
Software Dependencies | No | The paper is a theoretical work focusing on mathematical proofs and does not describe any software implementations or dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and presents mathematical proofs and derivations. It does not include details on experimental setup, hyperparameters, or training configurations.
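As the table notes, the paper ships no code. For illustration only, the ℓ1-penalized Lasso estimator analyzed in the paper can be sketched with a standard ISTA (proximal gradient) solver; the function names, step-size choice, and parameters below are our own assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding, the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 by proximal gradient (ISTA).

    X : (n, d) design matrix, y : (n,) responses, lam : regularization level.
    """
    n, d = X.shape
    # Lipschitz constant of the gradient of the smooth term: ||X||_2^2 / n.
    L = np.linalg.norm(X, 2) ** 2 / n
    beta = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n   # gradient of the least-squares term
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

# Example on simulated sparse data (illustrative only).
rng = np.random.default_rng(0)
n, d = 200, 20
beta_true = np.zeros(d)
beta_true[0], beta_true[5] = 3.0, -2.0
X = rng.standard_normal((n, d))
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = lasso_ista(X, y, lam=0.05)
```

With enough samples and a suitable lam, `beta_hat` recovers the sparse support of `beta_true` up to the usual soft-thresholding bias; the corrupted-covariate setting the paper studies would replace the least-squares gradient with a corrected (possibly nonconvex) surrogate, which this sketch does not implement.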