Neural Relational Inference with Fast Modular Meta-learning

Authors: Ferran Alet, Erica Weng, Tomás Lozano-Pérez, Leslie Pack Kaelbling

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We implement our solution in PyTorch (Paszke et al., 2017), using the Adam optimizer (Kingma & Ba, 2014); details and pseudo-code can be found in the appendix and code can be found at https://github.com/FerranAlet/modular-metalearning. We follow the choices of Kipf et al. (2018) whenever possible to make results comparable. Please see the arXiv version for complete results. We begin by addressing two problems on which NRI was originally demonstrated, then show that our approach can be applied to the novel problem of inferring the existence of unobserved nodes. [Section 5.1, Predicting physical systems:] Two datasets from Kipf et al. (2018) are available online (https://github.com/ethanfetaya/NRI/); in each one, we observe the state of a dynamical system for 50 time steps and are asked both to infer the relations between object pairs and to predict their states for the next 10 time steps." (A minimal sketch of this 50-step-observation / 10-step-prediction setup appears after the table.)
Researcher Affiliation | Academia | Ferran Alet, Erica Weng, Tomás Lozano-Pérez, Leslie Pack Kaelbling; MIT Computer Science and Artificial Intelligence Laboratory; {alet,ericaw,tlp,lpk}@mit.edu
Pseudocode | Yes | "details and pseudo-code can be found in the appendix"
Open Source Code | Yes | "code can be found at https://github.com/FerranAlet/modular-metalearning"
Open Datasets | Yes | "Two datasets from Kipf et al. (2018) are available online (https://github.com/ethanfetaya/NRI/); in each one, we observe the state of a dynamical system for 50 time steps and are asked both to infer the relations between object pairs and to predict their states for the next 10 time steps."
Dataset Splits | No | No specific training/validation/test dataset splits (e.g., an 80/10/10 split or per-split sample counts) are explicitly stated in the provided text. The paper mentions a 'train split' and a 'test split' only within the meta-learning algorithm's inner loop, not for the overall dataset partitioning used to evaluate the model. (A sketch of this inner-loop split appears after the table.)
Hardware Specification | No | No specific hardware (e.g., GPU models, CPU types, or cloud compute instances) used for the experiments is mentioned in the paper.
Software Dependencies | No | "We implement our solution in PyTorch (Paszke et al., 2017), using the Adam optimizer (Kingma & Ba, 2014)"; no specific version numbers for PyTorch or other dependencies are provided.
Experiment Setup | No | The paper states that 'details and pseudo-code can be found in the appendix', suggesting that experimental setup details such as hyperparameters may be there, but they are not explicitly provided in the main text.
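
As context for the "Research Type" and "Open Datasets" rows, here is a minimal PyTorch sketch of the task the paper describes: observe a trajectory for 50 time steps, then predict the next 10, training with Adam as the paper states. The `DynamicsModel` module, the flattened state layout, and all hyperparameter values are hypothetical placeholders, not the authors' implementation (which lives at https://github.com/FerranAlet/modular-metalearning).

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the paper's (modular) dynamics model.
class DynamicsModel(nn.Module):
    def __init__(self, num_objects, state_dim, hidden_dim=64):
        super().__init__()
        # One step of dynamics over all objects, flattened for simplicity.
        self.net = nn.Sequential(
            nn.Linear(num_objects * state_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_objects * state_dim),
        )

    def forward(self, state):            # state: [batch, num_objects * state_dim]
        return state + self.net(state)   # residual next-state prediction

def rollout(model, state, num_steps):
    """Autoregressively predict num_steps future states from the last observation."""
    preds = []
    for _ in range(num_steps):
        state = model(state)
        preds.append(state)
    return torch.stack(preds, dim=1)     # [batch, num_steps, num_objects * state_dim]

# Toy data with the 50-observed / 10-predicted split described in the paper.
batch, num_objects, state_dim = 32, 5, 4
trajs = torch.randn(batch, 60, num_objects * state_dim)    # placeholder trajectories
observed, target = trajs[:, :50], trajs[:, 50:]

model = DynamicsModel(num_objects, state_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as named in the paper

for epoch in range(10):
    optimizer.zero_grad()
    preds = rollout(model, observed[:, -1], num_steps=10)  # predict the next 10 steps
    loss = nn.functional.mse_loss(preds, target)
    loss.backward()
    optimizer.step()
```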
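For the "Dataset Splits" row: the train/test split the paper mentions is a per-task split inside the meta-learning inner loop (used to choose and score module compositions for one task), not a global dataset partition. A minimal sketch of that idea follows; the function name and the split fraction are hypothetical, since the paper does not state the actual proportions.

```python
import torch

def inner_loop_split(trajectory, train_fraction=0.8):
    """Split one task's time steps into an inner train split (used to adapt/choose
    modules) and an inner test split (used to score that choice).
    train_fraction is a hypothetical value; the paper does not state it."""
    num_steps = trajectory.shape[0]
    cut = int(train_fraction * num_steps)
    return trajectory[:cut], trajectory[cut:]

# Each task is one trajectory; the split is within the task, not across the dataset.
task = torch.randn(50, 20)                  # 50 observed time steps, toy feature dim
inner_train, inner_test = inner_loop_split(task)
print(inner_train.shape, inner_test.shape)  # torch.Size([40, 20]) torch.Size([10, 20])
```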