On the Identifiability of Sparse ICA without Assuming Non-Gaussianity

Authors: Ignavier Ng, Yujia Zheng, Xinshuai Dong, Kun Zhang

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To empirically validate our proposed identifiability results, we carry out experiments under various settings. We also conduct ablation studies to verify the necessity of the proposed assumptions and include FastICA [23] as a representative baseline."
Researcher Affiliation | Academia | Ignavier Ng¹, Yujia Zheng¹, Xinshuai Dong¹, Kun Zhang¹,²; ¹Carnegie Mellon University, ²Mohamed bin Zayed University of Artificial Intelligence; {ignavierng, yujiazh, dongxinshuai, kunz1}@cmu.edu
Pseudocode | Yes | Algorithm 1: Decomposition-Based Method
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. It mentions using a third-party implementation (L-BFGS from SciPy) but gives no link to, or statement about, a release of its own code.
Open Datasets | No | The paper states that it "simulate[s] 10 sources" and generates the data parameters randomly, but provides no concrete access information (link, DOI, repository, or formal citation) for this simulated data.
Dataset Splits | No | The paper discusses experiments with "different sample sizes" and "1000 samples" but does not specify exact training, validation, or test splits.
Hardware Specification | No | The paper states: "We run each of the experiments on 12 CPUs and 8 GBs of memory." This gives a CPU count and a memory budget, but lacks specific processor models or other machine details needed for reproducibility.
Software Dependencies | Yes | "In our experiments, we use the L-BFGS algorithm [11] implemented in SciPy [44] to solve each unconstrained optimization problem of the quadratic penalty method."
Experiment Setup | Yes | "For the sparsity term ρ(A), we use MCP with hyperparameters λ = 1, α = 40 and λ = 0.1, α = 10 for the decomposition-based and likelihood-based methods, respectively. For the quadratic penalty method, we use c₁ = 10⁵ and c₁ = 10² for the decomposition-based and likelihood-based methods, respectively, and use β = 1.5 for both methods. Lastly, we also use a threshold of 0.01 to remove small weights in the estimated mixing matrix. Typically, we have t = 250 for each L-BFGS run and 125 iterations for the quadratic penalty method."
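
The rows above quote the ingredients of the experimental pipeline; the short sketches below illustrate them under stated assumptions. First, the Open Datasets row describes simulated data with 10 sources and randomly generated parameters, at sample sizes such as 1000. A minimal generator consistent with that description (the sparsity level, weight range, and Gaussian sources are assumptions, not the paper's exact protocol):

```python
# Hedged sketch of the simulated data described above: 10 sources, a random
# sparse mixing matrix, 1000 samples. The edge probability, weight range, and
# source distribution are assumptions, not the paper's protocol.
import numpy as np

def simulate_sparse_ica(n_sources=10, n_samples=1000, edge_prob=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Random sparse support; diagonal kept so the mixing matrix is non-degenerate.
    support = rng.random((n_sources, n_sources)) < edge_prob
    np.fill_diagonal(support, True)
    # Weights drawn away from zero with random signs (assumed range).
    signs = rng.choice([-1.0, 1.0], size=(n_sources, n_sources))
    weights = rng.uniform(0.5, 2.0, size=(n_sources, n_sources)) * signs
    A = np.where(support, weights, 0.0)
    # Gaussian sources, consistent with the paper's "without non-Gaussianity" setting.
    S = rng.standard_normal((n_samples, n_sources))
    X = S @ A.T  # observed mixtures
    return X, A, S

X, A_true, S_true = simulate_sparse_ica()
```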
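The Experiment Setup row specifies MCP as the sparsity term ρ(A) and a 0.01 threshold on the estimated mixing matrix. A sketch using the standard minimax concave penalty (Zhang, 2010); applying it elementwise to A and summing is an assumption about how the paper uses it:

```python
# Sketch of the MCP sparsity term rho(A) and the 0.01 hard threshold from the
# quoted setup. MCP follows its standard definition; elementwise application
# over A with a sum is an assumption about the paper's usage.
import numpy as np

def mcp(A, lam=1.0, alpha=40.0):
    """Minimax concave penalty, summed over the entries of A."""
    t = np.abs(A)
    inside = t <= alpha * lam
    vals = np.where(inside,
                    lam * t - t ** 2 / (2.0 * alpha),  # concave region
                    0.5 * alpha * lam ** 2)            # constant tail
    return vals.sum()

def hard_threshold(A, tau=0.01):
    """Zero out small estimated weights, as in the quoted post-processing."""
    return np.where(np.abs(A) < tau, 0.0, A)
```

The quoted hyperparameters (λ = 1, α = 40 for the decomposition-based method; λ = 0.1, α = 10 for the likelihood-based method) plug directly into `lam` and `alpha`.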
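The Software Dependencies and Experiment Setup rows together describe a quadratic penalty method whose unconstrained subproblems are solved with SciPy's L-BFGS, with penalty growth β = 1.5, roughly 125 outer iterations, and t = 250 iterations per L-BFGS run. A generic sketch of that loop; the objective `f` and constraint `h` are hypothetical placeholders, not the paper's decomposition-based or likelihood-based formulations:

```python
# Hedged sketch of a quadratic penalty loop matching the quoted hyperparameters:
# solve each unconstrained subproblem with SciPy's L-BFGS-B, then grow the
# penalty coefficient by beta = 1.5. f and h are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return 0.5 * np.sum(x ** 2)          # placeholder smooth objective

def h(x):
    return np.array([np.sum(x) - 1.0])   # placeholder equality constraint h(x) = 0

def quadratic_penalty(x0, c1=1e2, beta=1.5, n_outer=125, t=250):
    x, c = np.asarray(x0, dtype=float), c1
    for _ in range(n_outer):
        # Penalized subproblem: f(x) + (c / 2) * ||h(x)||^2
        obj = lambda x, c=c: f(x) + 0.5 * c * np.sum(h(x) ** 2)
        res = minimize(obj, x, method="L-BFGS-B", options={"maxiter": t})
        x = res.x       # warm-start the next subproblem from this solution
        c *= beta       # grow the penalty coefficient, as in the quoted setup
    return x

x_hat = quadratic_penalty(np.zeros(10), c1=1e2)
```

Growing c geometrically while warm-starting each subproblem from the previous solution is the standard penalty-method pattern that the quoted c₁ and β values suggest.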
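Finally, the Research Type row names FastICA [23] as the representative baseline. A minimal sketch of running it via scikit-learn (the implementation choice is an assumption; the paper does not say which FastICA implementation it compares against):

```python
# Hedged sketch: FastICA baseline via scikit-learn. The paper only cites
# FastICA [23]; this implementation choice is an assumption.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))  # placeholder mixtures, only for shape

ica = FastICA(n_components=10, max_iter=1000, random_state=0)
S_est = ica.fit_transform(X)   # estimated sources, shape (1000, 10)
A_est = ica.mixing_            # estimated mixing matrix, shape (10, 10)
```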