Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference

Authors: Ted Meeds, Max Welling

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The procedure is validated on six experiments. Section 4 presents extensive evidence of the correctness and efficiency of the approach. To demonstrate correctness, the authors show histograms of weighted samples alongside the true posterior (when known) and, for three experiments, the exact OMC weighted samples (when the exact Jacobian and optimal θ are known). To demonstrate efficiency, they compute the mean simulations per sample (SS), i.e., the number of simulations required to reach an ϵ threshold, and the effective sample size (ESS); a sketch of the ESS computation appears after the table.
Researcher Affiliation | Academia | Edward Meeds, Informatics Institute, University of Amsterdam (tmeeds@gmail.com); Max Welling, Informatics Institute, University of Amsterdam (welling.max@gmail.com), also affiliated with the Donald Bren School of Information and Computer Sciences, University of California, Irvine, and the Canadian Institute for Advanced Research.
Pseudocode | No | The paper describes its algorithms textually and mathematically but does not include any structured pseudocode or algorithm blocks; a hedged reconstruction of the OMC loop, based on that textual description, appears after the table.
Open Source Code | No | The paper references 'Autograd (github.com/HIPS/autograd)' as an example of a third-party automatic differentiation library, but it does not state that code for the described Optimization Monte Carlo (OMC) method is open source or provide a link to any such release. (A minimal Autograd Jacobian example is sketched after the table.)
Open Datasets | No | The paper focuses on simulation-based models and describes how pseudo-samples are generated by simulators (e.g., 'The simulator can generate data according to the normal distribution'). It does not use, or provide access information for, a pre-existing publicly available dataset in the traditional sense.
Dataset Splits | No | The paper mentions 'n = 5000 and repeated runs 5 times' and 'epsilon rounds' that mimic SMC, but it does not describe train/validation/test splits in the conventional machine-learning sense, since data are generated by simulation rather than drawn from a fixed dataset.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models, memory specifications, or the types of computing resources used for the experiments.
Software Dependencies | No | The paper mentions software such as Autograd ([14], github.com/HIPS/autograd) and numerical routines such as Newton's method, but it does not specify version numbers for these or any other software components, which reproducibility would require.
Experiment Setup | Yes | Unless otherwise noted, the authors used n = 5000 and repeated each run 5 times; the absence of error bars indicates very low deviation across runs. Each simulator uses a different optimization procedure, including Newton's method for smooth simulators and random-walk optimization for others; Jacobians were computed using one-sided finite differences (see the sketch after the table). To limit computational expense, a maximum of 1000 simulations per sample per round was imposed for all algorithms. In the predator-prey experiment, Gaussian noise ∼ N(0, 10²) is added at each full time step, lognormal priors are placed over θ, initial populations are held constant at 100 for both prey and predator, and θ = [1.0, 5.0, 0.2].
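
Although the paper contains no algorithm block, its textual description of OMC is concrete enough for a rough reconstruction: fix the simulator's random numbers u, optimize θ so the simulated statistics match the observed ones, and weight each accepted optimum by the prior over the Jacobian volume, w ∝ π(θ*) |JᵀJ|^(-1/2). The sketch below illustrates that loop under assumed interfaces; `simulator`, `distance`, `log_prior`, `optimizer`, and `jacobian` are all hypothetical callables, not the authors' code.

```python
import numpy as np

def omc_sample(simulator, distance, log_prior, theta_init,
               n_samples, eps, optimizer, jacobian, rng):
    """Hedged sketch of one OMC pass; all interfaces are illustrative."""
    thetas, weights = [], []
    for _ in range(n_samples):
        u = rng.integers(1 << 31)               # fix the simulator's randomness
        f = lambda th: simulator(th, seed=u)    # deterministic given u
        # optimize theta so simulated statistics match the observations
        theta_star = optimizer(lambda th: distance(f(th)), theta_init)
        if distance(f(theta_star)) <= eps:      # accept if within the epsilon threshold
            J = jacobian(f, theta_star)         # d(statistics) / d(theta) at the optimum
            # OMC importance weight: prior / sqrt(det(J^T J))
            w = np.exp(log_prior(theta_star)) / np.sqrt(np.linalg.det(J.T @ J))
            thetas.append(theta_star)
            weights.append(w)
    return np.array(thetas), np.array(weights)
```

Each draw of u is independent, which is what makes the procedure embarrassingly parallel: the loop body can be farmed out across workers with no communication.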
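The setup row notes that Jacobians were computed with one-sided finite differences. A minimal version of that estimator follows; the step size h is an assumption, as the paper does not report one.

```python
import numpy as np

def fd_jacobian(f, theta, h=1e-5):
    """One-sided finite-difference Jacobian of f: R^d -> R^m (illustrative)."""
    f0 = np.asarray(f(theta))
    J = np.empty((f0.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = h                              # perturb one coordinate at a time
        J[:, j] = (np.asarray(f(theta + step)) - f0) / h
    return J
```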
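The Autograd library the paper cites could supply the same Jacobian by automatic differentiation instead. A minimal, hypothetical example; the simulator here is a toy stand-in, not one of the paper's six experiments.

```python
import autograd.numpy as np
from autograd import jacobian

def simulator_stats(theta):
    # Toy deterministic "simulator": statistics as a function of theta,
    # with the randomness u held fixed (illustrative only).
    return np.array([theta[0] * theta[1], np.exp(theta[2])])

J = jacobian(simulator_stats)(np.array([1.0, 5.0, 0.2]))  # 2x3 Jacobian
```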
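Of the two efficiency metrics reported, ESS has a standard closed form for importance weights, ESS = (Σᵢ wᵢ)² / Σᵢ wᵢ². A one-function sketch:

```python
import numpy as np

def effective_sample_size(w):
    """ESS = (sum w)^2 / sum(w^2), the standard estimator for importance weights."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# Equal weights recover the nominal sample size; skewed weights shrink it.
print(effective_sample_size(np.ones(5000)))  # 5000.0
```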