Understanding ℓ⁴-based Dictionary Learning: Interpretation, Stability, and Robustness

Authors: Yuexiang Zhai, Hermish Mehta, Zhengyuan Zhou, Yi Ma

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To corroborate the theoretical analysis, we also provide extensive and compelling experimental evidence with both synthetic data and real images." and Section 4, "Simulations and Experiments".
Researcher Affiliation | Collaboration | Department of EECS, UC Berkeley; Byte Dance Inc.; Stern School of Business, NYU.
Pseudocode | No | The paper describes the MSP algorithm through mathematical equations (e.g., Equation 3) but does not present it in a structured pseudocode or algorithm block.
Open Source Code | Yes | "Codes are available at https://github.com/hermish/ZMZM-ICLR-2020."
Open Datasets | Yes | MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky et al., 2009).
Dataset Splits | No | The paper reports overall sample sizes (e.g., n = 50, p = 20,000 in Figures 2 and 3) and describes how the data matrices are constructed, but it specifies no explicit training/validation/test splits, either as percentages or absolute counts.
Hardware Specification | No | The paper provides no hardware details (GPU or CPU models, memory) for its experiments.
Software Dependencies | No | The paper names no ancillary software dependencies, such as libraries with version numbers, required to replicate the experiments.
Experiment Setup | Yes | "In this simulation, we run the MSP algorithm from Equation 3, using the imperfect measurements Y of different models (Y_N, Y_O, Y_C)."; "As shown in Figure 2, the normalized value of ‖W Dₒ‖₄⁴/n reaches the global maximum with all types of inputs when varying the level of noise, outliers, and sparse corruptions."; "We run the experiments by increasing the sample size p w.r.t. the scale of imperfect measurements η, τ, β, respectively."; "We run the MSP algorithm with 100 iterations on both Y and a noisy version Y_N, and the learned top bases are visualized in Figure 5."
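Although the paper gives no pseudocode, the MSP (matching, stretching, projection) update it describes amounts to projected gradient ascent on ‖AY‖₄⁴ over the orthogonal group. The sketch below is a minimal reconstruction, not the authors' released code: it assumes the update A ← 𝒫_O(n)[∇‖AY‖₄⁴], with the gradient 4(AY)^∘3 Yᵀ and the projection onto O(n) computed from the SVD; function names and the initialization are illustrative choices, not from the paper.

```python
import numpy as np

def msp_step(A, Y):
    """One MSP update (sketch): stretch via the gradient, project onto O(n)."""
    # Gradient of ||A Y||_4^4 with respect to A: 4 (A Y)^{o3} Y^T
    G = 4 * (A @ Y) ** 3 @ Y.T
    # Projection onto the orthogonal group: G = U S V^T  ->  U V^T
    U, _, Vt = np.linalg.svd(G)
    return U @ Vt

def msp(Y, n_iter=100, seed=0):
    """Run MSP iterations from a random orthogonal initialization (assumed)."""
    n = Y.shape[0]
    rng = np.random.default_rng(seed)
    A, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal start
    for _ in range(n_iter):
        A = msp_step(A, Y)
    return A
```

Each iterate stays exactly orthogonal because the SVD-based projection returns a product of orthogonal factors, which mirrors the paper's constraint that the learned dictionary is orthonormal.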
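The three imperfect-measurement models (Y_N, Y_O, Y_C) are defined by equations in the paper; the generator below is only a plausible sketch of such models, assuming η scales additive Gaussian noise, τ sets the fraction of appended outlier columns, and β the density of sparse corruptions. The function name and these exact roles are assumptions, not taken from the paper.

```python
import numpy as np

def imperfect_measurements(Y, eta=0.1, tau=0.1, beta=0.1, seed=0):
    """Hypothetical generators for noisy, outlier, and corrupted measurements."""
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    # Y_N: additive Gaussian noise at level eta
    Y_noise = Y + eta * rng.standard_normal((n, p))
    # Y_O: append tau * p outlier columns drawn from a dense Gaussian
    outliers = rng.standard_normal((n, int(tau * p)))
    Y_outlier = np.concatenate([Y, outliers], axis=1)
    # Y_C: sparse corruptions on a beta fraction of entries
    mask = rng.random((n, p)) < beta
    Y_corrupt = Y + mask * rng.standard_normal((n, p))
    return Y_noise, Y_outlier, Y_corrupt
```

Under such models, the experiment described above would sweep η, τ, and β while tracking the normalized objective ‖W Dₒ‖₄⁴/n as the recovery diagnostic.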