Generalizing Nonlinear ICA Beyond Structural Sparsity

Authors: Yujia Zheng, Kun Zhang

NeurIPS 2023

Reproducibility Variable — Result — LLM Response

- Research Type — Experimental. Theoretical claims are supported empirically on both synthetic and real-world datasets.
- Researcher Affiliation — Academia. Yujia Zheng (1), Kun Zhang (1, 2); 1: Carnegie Mellon University; 2: Mohamed bin Zayed University of Artificial Intelligence. Contact: {yujiazh, kunz1}@cmu.edu.
- Pseudocode — No. The paper contains no section or figure explicitly labeled "Pseudocode" or "Algorithm", nor any structured, code-like procedural steps.
- Open Source Code — No. The paper states that experiments use the official implementation of GIN and that GLOW is part of FrEIA, and it links to these existing open-source projects (https://github.com/vislearn/FrEIA and https://github.com/VLL-HD/GIN). However, it does not state that the authors' own modifications or code for the proposed methodology are released.
- Open Datasets — Yes. "To study how reasonable the proposed theories are w.r.t. the practical generating process of observational data in complex scenarios, we conduct experiments on 'Triangles' (Yang et al., 2022) and EMNIST (Cohen et al., 2017) datasets."
- Dataset Splits — No. The paper specifies the total sample sizes for the synthetic datasets (2,000) and the total number of images for 'Triangles' (60,000) and EMNIST (240,000), but it does not detail how these datasets were split into training, validation, or test sets (e.g., specific percentages or per-split sample counts).
- Hardware Specification — No. The paper provides no specific hardware details such as CPU or GPU models, memory specifications, or cloud computing instance types used for the experiments.
- Software Dependencies — No. The paper mentions Generative Flow (GLOW), the General Incompressible-flow Network (GIN), and the FrEIA framework, but it specifies no version numbers for these components or for broader dependencies such as the programming language (e.g., Python version) or deep learning libraries (e.g., PyTorch, TensorFlow versions).
- Experiment Setup — Yes. Training on the synthetic datasets uses a learning rate of 0.01 and a batch size of 200, with the number of coupling layers for both GIN and GLOW set to 10. For the 'Triangles' dataset, the learning rate is 3×10⁻⁴ and the batch size is 100. For EMNIST, the learning rate and batch size are 3×10⁻⁴ and 240, respectively.
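The reported hyperparameters can be collected into a single configuration table; the sketch below does this with a plain dictionary. The dataset keys, field names, and the `get_config` helper are illustrative assumptions, not part of the paper's code; only the numeric values come from the paper's text.

```python
# Hypothetical summary of the hyperparameters reported in the paper.
# Only the numeric values are taken from the paper; all names are illustrative.
TRAIN_CONFIGS = {
    "synthetic": {"learning_rate": 1e-2, "batch_size": 200, "coupling_layers": 10},
    "triangles": {"learning_rate": 3e-4, "batch_size": 100},
    "emnist":    {"learning_rate": 3e-4, "batch_size": 240},
}


def get_config(dataset: str) -> dict:
    """Look up the reported hyperparameters for a dataset (illustrative helper)."""
    return TRAIN_CONFIGS[dataset]


print(get_config("emnist"))  # reported EMNIST settings
```

Such a table makes it easy to check at a glance which settings the paper does report (learning rates, batch sizes, coupling-layer counts) and which it omits (e.g., optimizer choice, number of epochs).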