A Compositional Object-Based Approach to Learning Physical Dynamics

Authors: Michael Chang, Tomer Ullman, Antonio Torralba, Joshua Tenenbaum

ICLR 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. By comparing to less structured architectures, we show that the NPE's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize across variable object count and different scene configurations, and infer latent properties of objects such as mass.
Researcher Affiliation | Academia | Michael B. Chang*, Tomer Ullman**, Antonio Torralba*, and Joshua B. Tenenbaum**; *Department of Electrical Engineering and Computer Science, MIT; **Department of Brain and Cognitive Sciences, MIT; {mbchang,tomeru,torralba,jbt}@mit.edu
Pseudocode | No | The paper describes the architecture and its component functions (e.g., encoder, decoder) but does not provide any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | No | The paper states, 'Using the matter-js physics engine, we evaluate the NPE on worlds of balls and obstacles' and 'For a world of k objects, we generate 50,000 such trajectories,' indicating that the authors generated their own datasets rather than using a pre-existing, publicly accessible dataset with a formal citation or link.
Dataset Splits | Yes | We used minibatches of size 50 and used a 70-15-15 split for training, validation, and test data.
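The 70-15-15 split can be sketched in plain Python, using the 50,000 trajectories per world the paper reports generating; the paper does not describe the splitting procedure itself, so the integer arithmetic below is one plausible reading, not the authors' code:

```python
# Illustrative sketch of a 70-15-15 train/validation/test split over the
# 50,000 trajectories generated per world (an assumption: the paper does
# not specify how the split was computed).
n_trajectories = 50_000
n_train = n_trajectories * 70 // 100        # 35,000 trajectories
n_val = n_trajectories * 15 // 100          # 7,500 trajectories
n_test = n_trajectories - n_train - n_val   # remaining 7,500 trajectories
```

Assigning the test set as the remainder guarantees the three partitions cover the dataset exactly, even if the percentages did not divide evenly.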
Hardware Specification | No | The paper does not specify any particular hardware used for running experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper mentions 'neural network libraries built by Collobert et al. (2011); Léonard et al. (2015)' but does not provide specific version numbers for these or other software dependencies required for reproducibility.
Experiment Setup | Yes | We trained all models using the rmsprop (Tieleman and Hinton, 2012) backpropagation algorithm with a Euclidean loss for 1,200,000 iterations with a learning rate of 0.0003 and a learning rate decay of 0.99 every 2,500 training iterations, beginning at iteration 50,000. We used minibatches of size 50 and used a 70-15-15 split for training, validation, and test data.
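The quoted schedule (base rate 0.0003, multiplied by 0.99 every 2,500 iterations starting at iteration 50,000) can be expressed as a plain-Python learning-rate function; this is a hedged sketch of the schedule as described, not the authors' original (Torch-based) training code, and the function name and step arithmetic are assumptions:

```python
# Hedged sketch (not the authors' code): the learning-rate schedule quoted
# in the Experiment Setup row, as a function of the training iteration.
def learning_rate(iteration, base_lr=0.0003, decay=0.99,
                  decay_every=2_500, decay_start=50_000):
    """Rate after decaying by `decay` every `decay_every` iterations,
    beginning at iteration `decay_start` (before that, the base rate)."""
    if iteration < decay_start:
        return base_lr
    steps = (iteration - decay_start) // decay_every
    return base_lr * decay ** steps
```

Under this reading, the first decayed rate (0.0003 × 0.99) takes effect at iteration 52,500, and the schedule continues through the reported 1,200,000 training iterations.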