Function Classes for Identifiable Nonlinear Independent Component Analysis

Authors: Simon Buchholz, Michel Besserve, Bernhard Schölkopf

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "An experimental illustration of this setting and Theorem 4 below can be found in Appendix H. The notion of locality in Definition 5 should be understood as non-global and notably does not imply restrictions to a small neighbourhood, as local properties often do." (Page 5) And from Appendix H: "In this Appendix, we illustrate our theoretical results from Section 4 and 5 on local identifiability of OCTs vs. general nonlinear functions numerically. We train a normalizing flow, following the setup in [13, 39], to learn the inverse of an orthogonal map f : R^d → R^d (for d = 2) and investigate its behaviour in regions where the mixing changes continuously."
Researcher Affiliation | Academia | Simon Buchholz, Michel Besserve & Bernhard Schölkopf, Max Planck Institute for Intelligent Systems, Tübingen, Germany; {sbuchholz,mbesserve,bs}@tue.mpg.de
Pseudocode | No | The paper contains theoretical discussions, definitions, and proofs, but no section or figure explicitly labelled "Pseudocode" or "Algorithm".
Open Source Code | Yes | "3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Appendix H" (Page 9), and from Appendix H: "Our code is available at https://anonymous.4open.science/r/function-classes-for-nonlinear-ica-19EF/"
Open Datasets | No | For the experiments, synthetic data was generated according to Eq. (1) (Appendix H). This is not a pre-existing, publicly available dataset, and no access information for a public dataset is provided.
Dataset Splits | No | The paper mentions drawing 10,000 samples in each epoch for training, but it does not specify explicit train/validation/test splits, proportions, or sample counts for these subsets.
Hardware Specification | Yes | "All experiments were run on a single NVIDIA A100 GPU."
Software Dependencies | No | The paper mentions using nflows (implied by reference [13], "nflows: normalizing flows in PyTorch") and the Adam optimizer [31]. However, it does not provide version numbers for these software components or frameworks (e.g., "PyTorch 1.9" or "nflows 0.1.0"), which would be required for full reproducibility.
Experiment Setup | Yes | "We use Adam [31] with a learning rate of 10^-3 and train for 100 epochs with a batch size of 256." (A sketch reconstructing this setup is shown below the table.)
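
To make the reported setup concrete, the following is a minimal sketch assembled from the details quoted above: synthetic data x = f(s) per Eq. (1) with an orthogonal mixing f : R^2 → R^2, a normalizing flow trained by maximum likelihood with Adam (learning rate 10^-3, 100 epochs, batch size 256, 10,000 samples per epoch). The flow architecture, the uniform source distribution, and the rotation angle are illustrative assumptions (the architecture follows the nflows quickstart), not the authors' exact configuration.

```python
import math
import torch
from torch import optim
from nflows.flows.base import Flow
from nflows.distributions.normal import StandardNormal
from nflows.transforms.base import CompositeTransform
from nflows.transforms.autoregressive import MaskedAffineAutoregressiveTransform
from nflows.transforms.permutations import ReversePermutation

d = 2  # dimensionality used in Appendix H

def sample_mixed_data(n):
    """Draw independent sources s and mix them with an orthogonal map f,
    cf. Eq. (1). A fixed 2-D rotation and uniform sources are assumptions
    made here for illustration; the paper's exact choices may differ."""
    s = torch.rand(n, d)  # sources, assumed uniform on [0, 1]^2
    theta = 0.7           # arbitrary rotation angle
    f = torch.tensor([[math.cos(theta), -math.sin(theta)],
                      [math.sin(theta),  math.cos(theta)]])
    return s @ f.T        # observations x = f(s)

# Flow architecture: the nflows quickstart stack of autoregressive affine
# layers; the paper defers its architecture to [13, 39], so this is a guess.
transforms = []
for _ in range(5):
    transforms.append(ReversePermutation(features=d))
    transforms.append(MaskedAffineAutoregressiveTransform(features=d,
                                                          hidden_features=32))
flow = Flow(CompositeTransform(transforms), StandardNormal(shape=[d]))

# Hyperparameters as reported: Adam, lr 1e-3, 100 epochs, batch size 256,
# with 10,000 fresh samples drawn in each epoch.
optimizer = optim.Adam(flow.parameters(), lr=1e-3)
for epoch in range(100):
    x = sample_mixed_data(10_000)
    for batch in x.split(256):
        optimizer.zero_grad()
        loss = -flow.log_prob(inputs=batch).mean()  # maximum likelihood
        loss.backward()
        optimizer.step()
```

After training, the learned flow approximates the inverse of the mixing f, which is the object whose local behaviour Appendix H inspects in regions where the mixing changes continuously.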