Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Global Identifiability of Overcomplete Dictionary Learning via L1 and Volume Minimization
Authors: Yuchen Sun, Kejun Huang
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we propose an algorithm based on alternating minimization to solve the new proposed formulation. ... The proposed algorithm is applied to synthetically generated data to demonstrate that global identifiability can indeed be guaranteed via solving (2). |
| Researcher Affiliation | Academia | Yuchen Sun & Kejun Huang Department of Computer and Information Science and Engineering University of Florida Gainesville, FL 32611, USA EMAIL |
| Pseudocode | Yes | Algorithm 1 Solving (14) via alternating minimization |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | No | We synthetically generate random problems with s=5, m=10, k=20, and n=200. A groundtruth sparse coefficient matrix S ∈ R^{k×n} is generated from the sparse-Gaussian model SG(s), while the groundtruth dictionary A ∈ R^{m×k} is simply generated from a standard normal distribution. |
| Dataset Splits | No | The paper describes generating synthetic data for demonstration but does not specify any training/test/validation splits for this data. |
| Hardware Specification | No | The paper does not mention any specific hardware used for running the experiments or demonstrations. |
| Software Dependencies | No | The paper describes an algorithm and its application to synthetic data but does not specify any software libraries or their versions used for implementation. |
| Experiment Setup | Yes | Algorithm 1 runs on X with the penalty parameter set to 1000 (since we want the data fidelity term to be almost zero) and a tolerance of 10^-5. |
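The synthetic setup quoted above (s=5, m=10, k=20, n=200; sparse-Gaussian coefficients, standard-normal dictionary) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the sparse-Gaussian model SG(s) places s i.i.d. standard-normal nonzeros at uniformly random positions in each column of S, and the function name `generate_synthetic` is hypothetical.

```python
import numpy as np

def generate_synthetic(s=5, m=10, k=20, n=200, rng=None):
    """Generate a random overcomplete dictionary learning problem.

    Assumption: SG(s) means each column of S has exactly s nonzero
    entries, drawn i.i.d. from a standard normal distribution, with
    the support chosen uniformly at random.
    """
    rng = np.random.default_rng(rng)

    # Ground-truth sparse coefficient matrix S (k x n).
    S = np.zeros((k, n))
    for j in range(n):
        support = rng.choice(k, size=s, replace=False)
        S[support, j] = rng.standard_normal(s)

    # Ground-truth dictionary A (m x k), entries i.i.d. standard normal.
    A = rng.standard_normal((m, k))

    # Observed data matrix.
    X = A @ S
    return A, S, X

A, S, X = generate_synthetic(rng=0)
```

With m=10 and k=20 the dictionary is overcomplete (twice as many atoms as observed dimensions), matching the paper's setting.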