Softstar: Heuristic-Guided Probabilistic Inference

Authors: Mathew Monfort, Brenden M. Lake, Brian D. Ziebart, Patrick Lucey, Joshua B. Tenenbaum

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present the algorithm, analyze approximation guarantees, and compare performance with simulation-based inference on two distinct complex decision tasks.
Researcher Affiliation | Collaboration | Mathew Monfort, Computer Science Department, University of Illinois at Chicago, Chicago, IL 60607, mmonfo2@uic.edu; Brenden M. Lake, Center for Data Science, New York University, New York, NY 10003, brenden@nyu.edu; Brian D. Ziebart, Computer Science Department, University of Illinois at Chicago, Chicago, IL 60607, bziebart@uic.edu; Patrick Lucey, Disney Research Pittsburgh, Pittsburgh, PA 15232, patrick.lucey@disneyresearch.com; Joshua B. Tenenbaum, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, jbt@mit.edu
Pseudocode | Yes | Algorithm 1 (Softstar: Greedy forward and approximate backward search with fixed ordering); an illustrative sketch of the general idea appears after this table.
Open Source Code | No | The paper does not provide any explicit statements or links regarding the availability of its source code.
Open Datasets | Yes | The data consists of a randomly separated training set of 400 drawn characters, each with a unique demonstrated trajectory, and a separate test set of 52 examples where the handwritten characters are converted into skeletons of nodes within a unit character frame [14]. [14] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. One-shot learning by inverting a compositional causal process. In NIPS, 2013.
Dataset Splits | No | The paper mentions training and test sets but does not explicitly specify a validation set split for either of the datasets used.
Hardware Specification | Yes | Results were collected on an Intel i7-3720QM CPU at 2.60GHz.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments.
Experiment Setup | No | The paper describes state and feature representations and heuristic functions, and it mentions training time ('~5 hours to train 10 epochs'), but it does not specify concrete hyperparameter values such as learning rates, batch sizes, or optimizer settings in the main text; it refers to the appendix for more information.
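The paper's Algorithm 1 is not reproduced here. Purely as a rough illustration of the general idea behind heuristic-guided probabilistic inference, the sketch below uses an A*-style priority queue with an admissible heuristic to approximate the path partition function Z = Σ_paths exp(−cost(path)), the kind of softmax quantity Softstar-style inference targets. It is a minimal sketch, assuming a finite graph with nonnegative edge costs; all names (soft_path_search, succ, cost, h, eps, the toy graph) are hypothetical and not taken from the paper, and it omits the paper's backward pass and approximation guarantees.

```python
import heapq
import itertools
import math

def soft_path_search(succ, cost, h, start, goal, eps=1e-6, max_pops=100_000):
    """Approximate Z = sum over start->goal paths of exp(-cost(path)).

    Best-first search over paths, ordered by the A*-style lower bound
    f = g + h(state), where g is the cost accumulated so far and h is
    an admissible (never overestimating) heuristic. Paths whose
    best-case mass exp(-f) is negligible relative to the mass already
    collected at the goal are pruned.
    """
    tie = itertools.count()              # breaks ties between equal-priority entries
    frontier = [(h(start), 0.0, next(tie), start)]
    z = 0.0                              # accumulated exp(-cost) mass at the goal
    pops = 0
    while frontier and pops < max_pops:
        f, g, _, state = heapq.heappop(frontier)
        pops += 1
        # The frontier is ordered by f, so once the best remaining bound
        # contributes less than eps * z, every later path does too.
        if z > 0.0 and math.exp(-f) < eps * z:
            break
        if state == goal:
            z += math.exp(-g)            # a complete path: add its mass
            continue
        for nxt in succ(state):
            g2 = g + cost(state, nxt)
            heapq.heappush(frontier, (g2 + h(nxt), g2, next(tie), nxt))
    return z

# Toy usage on a 4-node DAG: two start->goal paths, with costs 2 and 3.
edges = {"s": {"a": 1.0, "b": 2.0}, "a": {"t": 1.0}, "b": {"t": 1.0}, "t": {}}
z = soft_path_search(
    succ=lambda s: edges[s],
    cost=lambda s, s2: edges[s][s2],
    h=lambda s: 0.0,                     # the trivial admissible heuristic
    start="s",
    goal="t",
)
print(z, math.exp(-2.0) + math.exp(-3.0))  # the two values should match
```

A sharper admissible heuristic tightens the f bounds, so low-mass paths are pruned sooner; with the trivial h = 0 the search degenerates to an unguided enumeration of paths in cost order.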