Nonlinear ICA Using Volume-Preserving Transformations
Authors: Xiaojiang Yang, Yi Wang, Jiacheng Sun, Xing Zhang, Shifeng Zhang, Zhenguo Li, Junchi Yan
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implement the framework by volume-preserving flow-based models, and verify our theory by experiments on artificial data and synthesized images. Moreover, results on real-world images indicate that our framework can disentangle interpretable features. (See also Section 6, "EXPERIMENTS".) |
| Researcher Affiliation | Collaboration | 1 Shanghai Jiao Tong University, 2 Huawei Noah's Ark Lab |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Datasets. We run experiments both on an artificial dataset and a synthetic image dataset called 'Triangles', as well as on MNIST (LeCun et al., 1998) and CelebA (Liu et al., 2015). The generation processes of the artificial dataset and the synthetic images are described in Appendix D. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages, sample counts) for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer', 'GIN', and 'Real NVP', but it does not provide specific version numbers for these or other software dependencies (e.g., programming languages, libraries, or frameworks) required for reproducibility. |
| Experiment Setup | No | The paper states, 'The parameters of each network are updated by minimizing the loss function in Eq. 10 using an Adam optimizer (Kingma & Ba, 2014). Details of the networks and their optimization refer to (Sorrenson et al., 2020).', but it does not provide specific experimental setup details such as concrete hyperparameter values or training configurations in the main text. |
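Since the paper's setup details are deferred to Sorrenson et al. (2020), reproduction requires reconstructing the model class from its defining property: a volume-preserving transformation has unit Jacobian determinant, so additive (GIN-style) coupling layers qualify. The sketch below is a minimal NumPy illustration of that property, not the authors' code; the coupling "network" `shift_fn` and the weight matrix `W` are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))  # hypothetical stand-in for a learned coupling network

def shift_fn(x1):
    # Arbitrary nonlinear function of the conditioning half (illustrative only).
    return np.tanh(x1 @ W)

def additive_coupling(x):
    # Split the input in half; shift the second half by a function of the first.
    # The Jacobian is lower-triangular with unit diagonal, so det = 1 exactly.
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 + shift_fn(x1)], axis=-1)

def inverse_coupling(y):
    # Exact inverse: subtract the same shift computed from the untouched half.
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    return np.concatenate([y1, y2 - shift_fn(y1)], axis=-1)

def numerical_jacobian(f, x, eps=1e-6):
    # Central-difference Jacobian, used to verify volume preservation numerically.
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x = rng.normal(size=4)
y = additive_coupling(x)
det = np.linalg.det(numerical_jacobian(additive_coupling, x))  # ~1.0 (volume preserving)
```

Stacking such layers (with permutations between them, as in GIN/RealNVP-style flows) and minimizing a negative log-likelihood with Adam would mirror the training loop the paper describes, but the concrete architecture and hyperparameters remain unspecified in the main text.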