Identifying through Flows for Recovering Latent Representations

Authors: Shen Li, Bryan Hooi, Gim Hee Lee

ICLR 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: "Simulations on synthetic data validate the correctness and effectiveness of our proposed method and demonstrate its practical advantages over other existing methods."
Researcher Affiliation | Academia | Shen Li, Institute of Data Science & NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore (shen.li@u.nus.edu); Bryan Hooi & Gim Hee Lee, Department of Computer Science, National University of Singapore ({bhooi,gimhee.lee}@comp.nus.edu.sg)
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper links to the code of a baseline model (iVAE) used for comparison, but gives no statement or link for open-source code of the proposed iFlow method.
Open Datasets | No | The paper states: "We generate a synthetic dataset where the sources are non-stationary Gaussian time-series, as described in (Khemakhem et al., 2019): the sources are divided into M segments of L samples each. The auxiliary variable u is set to be the segment index." It describes how the synthetic data are generated but provides no direct link or citation for accessing the specific generated dataset used in the experiments (an illustrative data-generation sketch is given after the table).
Dataset Splits | No | The paper describes generating a synthetic dataset and training with mini-batches, but does not specify training/validation/test splits (e.g., percentages, sample counts, or split files).
Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor speeds, memory sizes, or other machine specifications) used to run the experiments were found.
Software Dependencies | No | The paper mentions an Adam optimizer and an RQ-NSF(AR) architecture, but lists no software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or specific library versions).
Experiment Setup | Yes | The mapping λθ that outputs the natural parameters of the conditional factorized exponential-family distribution is parameterized by a multi-layer perceptron whose last layer uses a softplus nonlinearity. A negation is additionally applied to the second-order natural parameters to ensure their finiteness. The bijection hφ is modeled by RQ-NSF(AR) (Durkan et al., 2019b) with a flow length of 10 and 8 bins, which provides sufficient flexibility and expressiveness. Each training iteration uses a mini-batch of size 64 and an Adam optimizer with learning rate chosen from {0.01, 0.001} to optimize the learning objective (15).
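
To make the Experiment Setup row concrete, here is a minimal, hypothetical PyTorch sketch of how the described λθ network and optimizer could look. It is not the authors' implementation: the hidden width, the input/latent dimensions, and the exact split into first- and second-order natural parameters are assumptions, and the RQ-NSF(AR) bijection hφ (flow length 10, 8 bins) is only indicated in a comment.

```python
# Hypothetical sketch of the reported setup, not the authors' code.
import torch
import torch.nn as nn

class NaturalParameterNet(nn.Module):
    """MLP lambda_theta: maps the auxiliary variable u to natural parameters."""
    def __init__(self, u_dim, latent_dim, hidden_dim=64):  # hidden_dim is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(u_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2 * latent_dim),
            nn.Softplus(),                      # softplus on the last layer
        )
        self.latent_dim = latent_dim

    def forward(self, u):
        params = self.net(u)
        first_order = params[:, : self.latent_dim]
        # Negate the second-order natural parameters so they stay negative
        # (and hence finite), as described in the paper.
        second_order = -params[:, self.latent_dim :]
        return first_order, second_order

# The bijection h_phi would be an RQ-NSF(AR) flow with 10 steps and 8 bins
# (e.g., from a neural spline flow implementation); it is omitted here.
lambda_net = NaturalParameterNet(u_dim=40, latent_dim=5)   # dimensions assumed
optimizer = torch.optim.Adam(lambda_net.parameters(), lr=1e-3)  # lr in {0.01, 0.001}
# Training would then draw mini-batches of size 64 and minimize objective (15).
```

Because the softplus output is positive, negating the second block keeps the precision-like natural parameter of a conditional Gaussian negative, which is what keeps the exponential-family density well defined.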
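
Similarly, the Open Datasets row describes the synthetic data only in prose. The following is a minimal sketch, under assumed values for M, L, the source dimension, and the per-segment means and variances, of how non-stationary Gaussian sources with a segment-index auxiliary variable u could be generated; the mixing of sources into observations is omitted.

```python
# Hypothetical data-generation sketch (not the authors' code): M segments of
# L samples each, with segment-dependent Gaussian statistics and u = segment index.
import numpy as np

def generate_synthetic_sources(M=40, L=1000, d=5, seed=0):
    # M, L, d, and the sampling ranges below are assumptions for illustration.
    rng = np.random.default_rng(seed)
    z_segments, u_segments = [], []
    for segment in range(M):
        mean = rng.uniform(-3.0, 3.0, size=d)   # assumed per-segment mean
        std = rng.uniform(0.5, 3.0, size=d)     # assumed per-segment std
        z_segments.append(mean + std * rng.standard_normal((L, d)))
        u_segments.append(np.full(L, segment))  # auxiliary variable u = segment index
    return np.concatenate(z_segments), np.concatenate(u_segments)

z, u = generate_synthetic_sources()
print(z.shape, u.shape)  # (40000, 5) (40000,)
```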