Robustifying Sequential Neural Processes

Authors: Jaesik Yoon, Gautam Singh, Sungjin Ahn

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5. Experiments: In this section, we describe our experiments to answer two key questions: i) By resolving the problems of sparse context and obsolete context, can we improve meta-transfer learning? ii) If yes, what are the needed memory sizes and computational overhead during training? We also perform an ablation on RMR to demonstrate the need for flow-tracking and flow-interaction. In the rest, we first describe the baselines and our experiment settings. We then describe our results on dynamic 1D regression, dynamic 2D image completion, and dynamic 2D image rendering.
Researcher Affiliation | Collaboration | 1 SAP, 2 Department of Computer Science, Rutgers University, 3 Rutgers Center for Cognitive Science.
Pseudocode | Yes | Algorithm 1: Recurrent Memory Reconstruction
Open Source Code | No | The paper does not explicitly state that its source code is publicly available or provide a link to a repository for its implementation.
Open Datasets | Yes | The moving images are taken from the MNIST (LeCun et al., 1998) and the CelebA (Liu et al., 2015) datasets, and hence we call these settings moving MNIST and moving CelebA, respectively.
Dataset Splits | No | The paper mentions training and evaluating on "held-out sequences of tasks" and refers to a "held-out set" for convergence analysis, but it does not specify explicit train/validation/test dataset splits (e.g., percentages or sample counts) needed for reproduction.
Hardware Specification | No | The paper mentions training time and wall-clock time but does not provide specific hardware details such as GPU/CPU models or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions TensorFlow and PyTorch but does not provide version numbers for these or any other software dependencies, which would be needed for a reproducible setup.
Experiment Setup | Yes | Hidden unit size in all models is 128. [...] Similarly, we test ASNP-RMR and ASNP-W using different memory sizes and analyze its effect. [...] increasing the latent size in SNP from 128 to 1024.
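
The Open Datasets row cites moving MNIST and moving CelebA, but since no code is released, a reproduction has to regenerate these sequences itself. The sketch below shows one common way to build a moving-digit sequence from a static MNIST image; the canvas size, sequence length, and bouncing constant-velocity dynamics are illustrative assumptions, not details confirmed by the paper.

    # Illustrative only: the paper's exact dynamics, canvas size, and sequence
    # length are not stated in this report, so the values below are assumed.
    import numpy as np

    def make_moving_digit_sequence(digit, canvas=64, length=20, rng=None):
        """Paste a static MNIST digit (e.g. a 28x28 array) onto a larger canvas
        and move it with constant velocity, bouncing off the borders."""
        rng = np.random.default_rng() if rng is None else rng
        h, w = digit.shape
        x, y = rng.integers(0, canvas - w), rng.integers(0, canvas - h)
        vx, vy = rng.integers(-3, 4, size=2)
        frames = np.zeros((length, canvas, canvas), dtype=digit.dtype)
        for t in range(length):
            frames[t, y:y + h, x:x + w] = digit
            # reverse a velocity component before it would push the digit off the canvas
            if not (0 <= x + vx <= canvas - w):
                vx = -vx
            if not (0 <= y + vy <= canvas - h):
                vy = -vy
            x, y = x + vx, y + vy
        return frames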
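
Likewise, the Experiment Setup row only quotes a few hyperparameters. The sketch below collects them into a hypothetical configuration object to make the described memory-size sweep concrete; the class and field names, the default memory size, and the swept values are assumptions, and only the literal values 128 and 1024 come from the quoted text.

    # Hypothetical reconstruction of the quoted setup; nothing here beyond the
    # numbers 128 and 1024 is taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class ExperimentConfig:
        model: str = "ASNP-RMR"      # other models named in the quote: "ASNP-W", "SNP"
        hidden_size: int = 128       # "Hidden unit size in all models is 128."
        snp_latent_size: int = 1024  # SNP latent size increased "from 128 to 1024"
        memory_size: int = 50        # swept for ASNP-RMR / ASNP-W; exact values not given

    def memory_size_sweep(sizes=(25, 50, 100)):
        """Enumerate configs for the memory-size comparison; sweep values are illustrative."""
        for model in ("ASNP-RMR", "ASNP-W"):
            for m in sizes:
                yield ExperimentConfig(model=model, memory_size=m)

For example, list(memory_size_sweep()) would enumerate the configurations for an ASNP-RMR versus ASNP-W comparison over the assumed memory sizes.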