Spatially Structured Recurrent Modules

Authors: Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wuthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, Bernhard Schölkopf

ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present a selection of experiments to quantitatively evaluate S2RMs and gauge their performance against strong baselines on two data domains, namely video prediction from crops on the well-known bouncing-balls domain and multi-agent world modelling from partial observations in the challenging Starcraft2 domain. We also include qualitative visualizations on a grid-world task in Appendix A. Additional tables, results and supporting plots can be found in Appendix F.
Researcher Affiliation | Academia | Nasim Rahaman (1,2), Anirudh Goyal (2), Muhammad Waleed Gondal (1), Manuel Wuthrich (1), Stefan Bauer (1), Yash Sharma (3), Yoshua Bengio (2,4), Bernhard Schölkopf (1). 1 Max-Planck Institute for Intelligent Systems Tübingen, 2 Mila, Québec, 3 Bethgelab, Eberhard Karls Universität Tübingen, 4 Université de Montréal.
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The second problem is that of multi-agent world modeling from partial observations in spatial domains, such as the challenging Starcraft2 domain (Samvelyan et al., 2019; Vinyals et al., 2017).
Dataset Splits | Yes | We use another 1K video sequences of the same length and the same number of balls as a held-out validation set.
Hardware Specification | Yes | We train all models with batch-size 8 (Starcraft2) or 32 (Bouncing Balls) on a single V100-32GB GPU (each).
Software Dependencies | Yes | We use PyTorch's (Paszke et al., 2019) ReduceLROnPlateau learning rate scheduler to decay the learning rate by a factor of 2 if the validation loss does not improve by at least 0.01% over the span of 5 epochs.
Experiment Setup | Yes | All models were trained using Adam (Kingma & Ba, 2014) with an initial learning rate of 0.0003. We use PyTorch's (Paszke et al., 2019) ReduceLROnPlateau learning rate scheduler to decay the learning rate by a factor of 2 if the validation loss does not improve by at least 0.01% over the span of 5 epochs. We initially train all models for 100 epochs, select the best of three successful runs, fine-tune it for another 100 epochs, and finally select the checkpoint with the lowest validation loss (i.e. we early stop). We train all models with batch-size 8 (Starcraft2) or 32 (Bouncing Balls) on a single V100-32GB GPU (each).
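The plateau-based decay rule quoted above can be sketched in plain Python. The `decayed_lr` helper below is a hypothetical illustration, not the authors' code; it mirrors the behavior of PyTorch's `ReduceLROnPlateau` with `mode="min"` and a relative improvement threshold, using the hyperparameters reported in the review (initial learning rate 3e-4, decay by a factor of 2, 0.01% relative threshold, patience of 5 epochs).

```python
def decayed_lr(val_losses, lr0=3e-4, factor=0.5, patience=5, rel_threshold=1e-4):
    """Return the learning rate after seeing `val_losses` (one per epoch).

    Hypothetical sketch of the schedule described in the paper: the LR is
    halved (factor=0.5, i.e. "decayed by a factor of 2") whenever the
    validation loss fails to improve on the best value so far by at least
    0.01% (rel_threshold=1e-4) for more than `patience` consecutive epochs.
    """
    lr = lr0
    best = float("inf")
    bad_epochs = 0
    for loss in val_losses:
        if loss < best * (1 - rel_threshold):  # meaningful relative improvement
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs > patience:  # plateau exceeded: decay and reset
                lr *= factor
                bad_epochs = 0
    return lr
```

In PyTorch this would correspond (under the same assumptions) to `torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=5, threshold=1e-4, threshold_mode="rel")`, stepped once per epoch with the validation loss.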