Temporal Difference Variational Auto-Encoder

Authors: Karol Gregor, George Papamakarios, Frederic Besse, Lars Buesing, Theophane Weber

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper includes experimental sections ('5 EXPERIMENTS', '5.1 PARTIALLY OBSERVED MINIPACMAN', '5.2 MOVING MNIST', '5.3 NOISY HARMONIC OSCILLATOR', and '5.4 DEEPMIND LAB ENVIRONMENT') in which empirical studies are conducted and performance metrics are reported, e.g. the ELBO and estimated negative log probability in Figure 2, with baseline comparisons in Section 5.1: 'TD-VAE outperforms both baselines, whereas the mean-field model is the least well-performing.'
Researcher Affiliation | Industry | All authors are listed with a DeepMind affiliation and '@google.com' email addresses ('{karolg, gpapamak, fbesse, lbuesing, theophane}@google.com'), indicating an industry affiliation.
Pseudocode | No | The paper includes 'Figure 1: Diagram of TD-VAE' and mathematical equations in Appendix D ('The set of equations describing the system are as follows.'), but it does not provide any structured pseudocode or algorithm blocks (a hedged sketch of the training objective follows the table).
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor a link to a code repository.
Open Datasets | Yes | The paper uses public data sources: the MiniPacman environment (Racanière et al., 2017), sequences of images of MNIST digits, and the DeepMind Lab environment (Beattie et al., 2016). These are standard, publicly available datasets or environments whose sources are properly cited (a sequence-generation sketch follows the table).
Dataset Splits | No | The paper does not explicitly state train/validation/test splits with percentages, sample counts, or references to predefined splits; it only mentions 'a test set' in Section 5.1.
Hardware Specification | No | The paper does not describe the hardware used for its experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper mentions software components and techniques such as LSTMs, the Adam optimizer, the ReLU nonlinearity, convolutional DRAW, and convolutional LSTMs, but it gives no version numbers for these or any other software dependencies, which are necessary for reproducibility.
Experiment Setup | Yes | Appendix D, 'FUNCTIONAL FORMS AND PARAMETER CHOICES', provides specific setup details, including: 'We use the Adam optimizer with learning rate 0.0005.' and 'The hidden layer size of the D maps is 50; the size of each z_t^l is 8. Belief states have size 50.' It also details per-experiment training parameters, such as 'We train the model with t1 and t2 separated by a random amount t2 − t1 from the interval [1, 4]' for Moving MNIST, and 'We train on sequences of length 200, with t2 − t1 taking values chosen at random from [1, 10] with probability 0.8 and from [1, 120] with probability 0.2' for the noisy harmonic oscillator (a sketch of this sampling follows the table).
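
On the pseudocode point: the equations in Appendix D reduce to a single training objective over a pair of times (t1, t2), and a code sketch may be easier to follow than prose. The following is our hypothetical PyTorch reconstruction for one latent layer; module and class names (GaussianHead, TDVAE) are ours, and the simple MLP heads stand in for the paper's richer networks (e.g. its convolutional DRAW decoder).

import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianHead(nn.Module):
    """Small MLP emitting a diagonal Gaussian (a stand-in for the paper's D maps)."""
    def __init__(self, in_dim, out_dim, hidden=50):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, out_dim)
        self.logsig = nn.Linear(hidden, out_dim)

    def forward(self, h):
        h = self.body(h)
        return Normal(self.mu(h), self.logsig(h).exp())

class TDVAE(nn.Module):
    def __init__(self, x_dim, b_dim=50, z_dim=8):
        super().__init__()
        self.belief = nn.LSTM(x_dim, b_dim, batch_first=True)  # filtering state b_t
        self.p_b = GaussianHead(b_dim, z_dim)                  # belief p_B(z_t | b_t)
        self.q_s = GaussianHead(2 * b_dim + z_dim, z_dim)      # smoothing q(z_t1 | z_t2, b_t1, b_t2)
        self.p_t = GaussianHead(z_dim, z_dim)                  # transition p_T(z_t2 | z_t1)
        self.p_x = GaussianHead(z_dim, x_dim)                  # decoder p_D(x_t2 | z_t2)

    def loss(self, x, t1, t2):
        """Negative TD-VAE objective for one (t1, t2) pair, averaged over the batch."""
        b, _ = self.belief(x)                                  # (batch, T, b_dim)
        b1, b2 = b[:, t1], b[:, t2]

        pb2 = self.p_b(b2)
        z2 = pb2.rsample()                                     # sample the state at the later time
        qs1 = self.q_s(torch.cat([b1, b2, z2], -1))
        z1 = qs1.rsample()                                     # infer the earlier state given the later one

        elbo = (self.p_x(z2).log_prob(x[:, t2]).sum(-1)        # reconstruct x_t2 from z_t2
                + self.p_b(b1).log_prob(z1).sum(-1)            # z_t1 plausible under the belief at t1
                + self.p_t(z1).log_prob(z2).sum(-1)            # z_t2 reachable from z_t1
                - pb2.log_prob(z2).sum(-1)                     # minus the belief at t2
                - qs1.log_prob(z1).sum(-1))                    # minus the smoothing posterior
        return -elbo.mean()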
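
On the datasets point: the MNIST-based sequences are straightforward to regenerate. A minimal generator under our reading of Section 5.2 (digits shifted horizontally by one pixel per frame with wraparound, direction fixed per sequence; the function name is ours):

import numpy as np
from torchvision.datasets import MNIST

def moving_digit_sequence(digit, seq_len=20, rng=np.random.default_rng()):
    """Shift a 28x28 digit one pixel per frame with wraparound;
    a fixed per-sequence direction is an assumption about the setup."""
    direction = int(rng.choice([-1, 1]))
    return np.stack([np.roll(digit, direction * t, axis=1) for t in range(seq_len)])

# Usage: mnist = MNIST(root=".", download=True)
#        frames = moving_digit_sequence(np.array(mnist[0][0]))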
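
Finally, the quoted Appendix D choices translate directly into code. A sketch of the (t1, t2) sampling for the two quoted regimes (helper names are ours; uniform sampling within each quoted interval is an assumption):

import random
import torch

def sample_pair_mnist(seq_len):
    """Moving MNIST: t2 - t1 drawn from [1, 4]."""
    dt = random.randint(1, 4)
    t1 = random.randint(0, seq_len - 1 - dt)
    return t1, t1 + dt

def sample_pair_oscillator(seq_len=200):
    """Noisy harmonic oscillator: t2 - t1 from [1, 10] with probability 0.8,
    otherwise from [1, 120], as quoted above."""
    dt = random.randint(1, 10) if random.random() < 0.8 else random.randint(1, 120)
    t1 = random.randint(0, seq_len - 1 - dt)
    return t1, t1 + dt

# Optimizer as stated in Appendix D:
# optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)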