Global Identifiability of $\ell_1$-based Dictionary Learning via Matrix Volume Optimization
Authors: Jingzhou Hu, Kejun Huang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we propose algorithms to solve the new proposed formulation, specifically one based on the linearized-ADMM with efficient per-iteration updates. The proposed algorithms exhibit surprisingly effective performance in correctly and efficiently recovering the dictionary, as demonstrated in the numerical experiments. |
| Researcher Affiliation | Academia | Jingzhou Hu, Kejun Huang; Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL 32611; {jingzhouhu,kejun.huang}@ufl.edu |
| Pseudocode | Yes | Algorithm 1 Solving (7) with L-ADMM |
| Open Source Code | No | The paper does not provide explicit statements about releasing source code or links to a code repository. |
| Open Datasets | No | For m = 20 and n = 1000, we randomly generate the ground-truth sparse coefficient matrix S according to the Bernoulli-Gaussian model with θ = 0.5, and the ground-truth dictionary matrix A completely at random, and generate the data matrix X = AS. For a given image, it is first divided into 8×8 non-overlapping patches, reshaped into a vector in R^m with m = 64, and stacked as columns of the data matrix X. |
| Dataset Splits | No | The paper generates synthetic data and uses image patches from a natural image, but does not describe any train/validation/test dataset splits. |
| Hardware Specification | No | The paper states that experiments are conducted in MATLAB but does not provide specific details on the hardware used (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | The paper mentions using MATLAB for experiments but does not provide specific version numbers for MATLAB or any other software dependencies. |
| Experiment Setup | Yes | For m = 20 and n = 1000, we randomly generate the ground-truth sparse coefficient matrix S according to the Bernoulli-Gaussian model with θ = 0.5, and the ground-truth dictionary matrix A completely at random, and generate the data matrix X = AS. We empirically found that setting ρ = mn works very well in practice. |
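
The synthetic setup quoted in the Experiment Setup row can be illustrated with a few lines of MATLAB (the paper's stated experiment environment). This is a minimal sketch under assumptions not confirmed by the quoted text: the dictionary is taken to be square (m × m), and θ = 0.5 is used as the Bernoulli parameter of the Bernoulli-Gaussian model.

```matlab
% Minimal sketch of the quoted synthetic-data setup.
% Assumptions (not confirmed by the paper's quoted text): square m-by-m
% dictionary, Bernoulli parameter theta = 0.5.
m = 20;        % signal dimension
n = 1000;      % number of samples
theta = 0.5;   % Bernoulli parameter of the Bernoulli-Gaussian model

A = randn(m, m);                          % ground-truth dictionary, fully random
S = randn(m, n) .* (rand(m, n) < theta);  % Bernoulli-Gaussian sparse coefficients
X = A * S;                                % noiseless data matrix
```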
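The image-patch preprocessing described in the Open Datasets row (8×8 non-overlapping patches, each reshaped into a vector in R^64 and stacked as a column of X) can be sketched as below. The file name and the cropping step are illustrative assumptions, not taken from the paper, and the sketch assumes a grayscale uint8 image.

```matlab
% Minimal sketch of the quoted patch extraction. 'image.png' and the crop
% to a multiple of 8 are illustrative assumptions.
img = double(imread('image.png')) / 255;     % grayscale image scaled to [0, 1]
p = 8;                                       % patch side length (m = p^2 = 64)
[h, w] = size(img);
img = img(1:floor(h/p)*p, 1:floor(w/p)*p);   % crop so both sides divide by p

X = zeros(p*p, (size(img, 1) / p) * (size(img, 2) / p));
k = 0;
for j = 1:p:size(img, 2)
    for i = 1:p:size(img, 1)
        k = k + 1;
        patch = img(i:i+p-1, j:j+p-1);       % one 8x8 non-overlapping patch
        X(:, k) = patch(:);                  % vectorize and stack as a column
    end
end
```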