A Bayesian-Symbolic Approach to Reasoning and Learning in Intuitive Physics

Authors: Kai Xu, Akash Srivastava, Dan Gutfreund, Felix Sosa, Tomer Ullman, Josh Tenenbaum, Charles Sutton

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that BSP is more sample-efficient compared to neural alternatives on controlled synthetic datasets, demonstrate BSP's applicability to real-world common-sense scenes, and study BSP's performance on tasks previously used to study human physical reasoning."
Researcher Affiliation | Collaboration | Kai Xu (University of Edinburgh, contact@xuk.ai); Akash Srivastava (MIT-IBM Watson AI Lab, akash.srivastava@ibm.com); Dan Gutfreund (MIT-IBM Watson AI Lab, dgutfre@us.ibm.com); Felix A. Sosa (Harvard University, fsosa@fas.harvard.edu); Tomer Ullman (Harvard University, tomerullman@gmail.com); Joshua B. Tenenbaum (Massachusetts Institute of Technology, jbt@mit.edu); Charles Sutton (University of Edinburgh & Google AI, c.sutton@ed.ac.uk)
Pseudocode | Yes | "Note appendix A.3 also provides all pseudo-code for algorithms introduced in this section."
Open Source Code | Yes | "Source code as well as training and testing data can be accessed at https://bsp.xuk.ai/."
Open Datasets | Yes | "Source code as well as training and testing data can be accessed at https://bsp.xuk.ai/."; "To demonstrate this, we use the PHYS101 dataset (Wu et al., 2016), a dataset of real-world physical scenes."; "For this purpose, we use the ULLMAN dataset from this study, which consists of 60 videos in which a set of discs interact with each other and mats within a bounded area, as exemplified in figure 9."
Dataset Splits | No | The paper mentions holding out 20 scenes for evaluation (testing) and using the first k scenes for training, but it does not explicitly describe a separate validation set or how one would be split off.
Hardware Specification | No | The paper does not provide hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using the Turing probabilistic programming language and the ExprOptimization.jl package in Julia, but it does not provide version numbers for these software components.
Experiment Setup | Yes | "In our work we consider at most three forces to be learned at the same time, thus setting K = 3"; "Finally, we add Gaussian noise to each trajectory... and sigma is the noise level"; "lambda is the hyper-parameter that controls the regularization"; "we use the L-BFGS optimizer to solve the lower-level optimization"; "All scenes are simulated using a physics engine with a time-discretization of 0.02, for 50 frames."
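The setup quoted above (a physics engine stepped at dt = 0.02 for 50 frames, with Gaussian noise at level sigma added to each trajectory) can be illustrated with a minimal Python sketch. This is not the authors' code: the constant-force point-mass dynamics, Euler integration, and all function names here are illustrative assumptions; only dt = 0.02, the 50-frame horizon, and the Gaussian noise model come from the paper.

```python
import numpy as np

def simulate_trajectory(pos0, vel0, force, mass=1.0, dt=0.02, frames=50):
    """Euler-integrate a 2D point mass under a constant force.

    dt = 0.02 and frames = 50 match the setup described in the paper;
    the constant-force dynamics are an illustrative stand-in for the
    paper's physics engine.
    """
    pos = np.asarray(pos0, dtype=float)
    vel = np.asarray(vel0, dtype=float)
    acc = np.asarray(force, dtype=float) / mass
    traj = [pos.copy()]
    for _ in range(frames - 1):
        vel = vel + acc * dt
        pos = pos + vel * dt
        traj.append(pos.copy())
    return np.stack(traj)  # shape: (frames, 2)

def add_observation_noise(traj, sigma, seed=None):
    """Add i.i.d. Gaussian noise at level sigma to every frame."""
    rng = np.random.default_rng(seed)
    return traj + rng.normal(0.0, sigma, size=traj.shape)

# Example: a ball launched horizontally under gravity, then observed with noise.
clean = simulate_trajectory(pos0=[0.0, 0.0], vel0=[1.0, 0.0], force=[0.0, -9.8])
noisy = add_observation_noise(clean, sigma=0.01, seed=0)
```

Under this sketch, `clean` is the noiseless 50-frame trajectory and `noisy` is the observed version; sweeping `sigma` reproduces the kind of noise-level ablation the setup describes.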