Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies

Authors: Alessandro Achille, Tom Eccles, Loic Matthey, Chris Burgess, Nicholas Watters, Alexander Lerchner, Irina Higgins

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Compared to baselines with entangled representations, our approach is able to reason beyond surface-level statistics and perform semantically meaningful cross-domain inference. We use a sequence of three datasets: (1) a moving version of Fashion-MNIST [54] (shortened to moving Fashion), (2) MNIST [31], and (3) a moving version of MNIST (moving MNIST). Figure 2 (bottom) shows that both VASE and CCI-VAE learn progressively more informative latent representations when exposed to each dataset, as evidenced by the increasing classification accuracy and decreasing mean squared error (MSE) measures within each stage of training. Here we perform a full ablation study to test the importance of the proposed components for unsupervised life-long representation learning.
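The quoted protocol trains on a sequence of datasets and tracks classification accuracy and reconstruction MSE within each stage. A minimal sketch of that per-stage bookkeeping, assuming a hypothetical model object with `train`/`classify`/`reconstruct` methods (the actual models and datasets are described in the paper and its appendices):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of correctly predicted labels."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def reconstruction_mse(x, x_recon):
    """Mean squared error between inputs and reconstructions."""
    return float(np.mean((np.asarray(x) - np.asarray(x_recon)) ** 2))

def evaluate_stages(model, stages):
    """Record (accuracy, MSE) after training on each dataset in sequence.

    `stages` is a list of (name, x, y) datasets, e.g. the paper's
    moving Fashion -> MNIST -> moving MNIST curriculum.
    """
    history = {}
    for name, x, y in stages:
        model.train(x, y)  # hypothetical stage-wise training call
        history[name] = (accuracy(y, model.classify(x)),
                         reconstruction_mse(x, model.reconstruct(x)))
    return history
```

This only illustrates the evaluation structure; it does not reproduce the paper's results.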
Researcher Affiliation | Collaboration | UCLA and DeepMind. achille@cs.ucla.edu, {eccles,lmatthey,cpburgess,nwatters,lerchner,irinah}@google.com
Pseudocode | No | The paper describes the model and framework in text and mathematical equations but does not contain any explicit 'Pseudocode' or 'Algorithm' blocks or figures.
Open Source Code | No | The paper does not contain any statements about releasing open-source code, nor does it provide links to a code repository.
Open Datasets | Yes | We use a sequence of three datasets: (1) a moving version of Fashion-MNIST [54] (shortened to moving Fashion), (2) MNIST [31], and (3) a moving version of MNIST (moving MNIST). Hence, we trained VASE on a sequence of two visually challenging DMLab-30 [7] datasets.
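The "moving" variants of MNIST and Fashion-MNIST are typically built by translating a static image across a larger canvas over time. A minimal sketch of one common recipe (linear motion with bouncing off the edges); the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def make_moving_sequence(digit, canvas=64, frames=10, seed=0):
    """Place a small image (e.g. a 28x28 MNIST digit) on a larger canvas
    and translate it linearly across frames, bouncing at the borders."""
    rng = np.random.default_rng(seed)
    h, w = digit.shape
    x = int(rng.integers(0, canvas - w))
    y = int(rng.integers(0, canvas - h))
    dx, dy = rng.choice([-2, -1, 1, 2], size=2)  # nonzero velocities
    seq = np.zeros((frames, canvas, canvas), dtype=digit.dtype)
    for t in range(frames):
        seq[t, y:y + h, x:x + w] = digit
        # reverse a velocity component before it would leave the canvas
        if not 0 <= x + dx <= canvas - w:
            dx = -dx
        if not 0 <= y + dy <= canvas - h:
            dy = -dy
        x, y = x + dx, y + dy
    return seq
```

The paper's exact canvas size, velocities, and sequence length are not stated in the quoted excerpt, so this is only a plausible construction.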
Dataset Splits | No | The paper refers to training and evaluation but does not specify explicit train/validation/test splits, percentages, or sample counts, nor does it reference predefined splits with citations for reproducibility of data partitioning.
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies | No | The paper mentions the Adam optimizer and the proposed VASE model, but it does not provide version numbers for any software dependencies or libraries.
Experiment Setup | No | The paper mentions hyperparameters such as λ, γ, and τ, and states that further details can be found in Appendices A.2, A.3, and A.6, indicating that the full experimental setup is not provided in the main text.
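For orientation on what γ typically controls in this model family: VASE builds on CCI-VAE (Burgess et al.), whose objective penalizes the deviation of the KL term from a target capacity C that is annealed during training. A sketch of that penalty, with all values illustrative (the paper's actual settings are deferred to its appendices):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior, per example."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def cci_vae_loss(recon_mse, mu, logvar, gamma=100.0, capacity=0.0):
    """CCI-VAE-style objective: reconstruction error plus gamma * |KL - C|.

    `capacity` is gradually increased during training to control how much
    information the latent code is allowed to carry.
    """
    kl = gaussian_kl(mu, logvar)
    return float(np.mean(recon_mse + gamma * np.abs(kl - capacity)))
```

This is a sketch of the published CCI-VAE penalty, not of VASE's full life-long objective (which also involves λ and τ).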