Pseudo-Likelihood Inference

Authors: Theo Gruner, Boris Belousov, Fabio Muratore, Daniel Palenicek, Jan R. Peters

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The effectiveness of PLI is evaluated on four classical SBI benchmark tasks and on a highly dynamic physical system, showing particular advantages on stochastic simulations and multi-modal posterior landscapes.
Researcher Affiliation | Collaboration | Theo Gruner (1,2), Boris Belousov (3), Fabio Muratore (4), Daniel Palenicek (1,2), Jan Peters (1,2,3,5). Affiliations: 1 Intelligent Autonomous Systems Group, Technical University of Darmstadt; 2 hessian.AI; 3 German Research Center for AI (DFKI); 4 Bosch Center for Artificial Intelligence; 5 Centre for Cognitive Science.
Pseudocode | Yes | Algorithm 1: Pseudo-Likelihood Inference (PLI). (An illustrative pseudo-likelihood sketch appears after the table.)
Open Source Code | Yes | Our implementation is based on Wasserstein-ABC [3], but instead of the r-hit kernel [30] employed there, it uses population Monte Carlo (Algorithm 3 of [31]), which showed improved performance in preliminary studies. https://github.com/theogruner/pseudo_likelihood_inference (A hedged PMC sketch appears after the table.)
Open Datasets | Yes | We evaluate PLI on four common benchmarking tasks within the SBI community [32]: Gaussian Location, a Gaussian Mixture Model, Simple-Likelihood Complex-Posterior, and SIR. (An example Gaussian Location simulator appears after the table.)
Dataset Splits | No | The paper reports the number of reference observations (N) and simulations (M) used for conditioning and evaluation, but it does not specify explicit training, validation, and test splits in the traditional sense; the data are generated from simulators given parameters.
Hardware Specification | Yes | All experiments are implemented in JAX [4], and each ran on a single Nvidia RTX 3090.
Software Dependencies | No | The paper mentions software such as JAX and OTT but does not provide version numbers for these or any other dependencies.
Experiment Setup | Yes | Table C.3: Hyper-parameter settings of the SBI methods as used for the experiments in Section 4. Likelihood kernel: Exponential Kernel; Trust-region threshold ε: 0.5; Model: Neural Spline Flow (NSF); ...; Learning rate: 1e-5; Epochs: 20; Train samples per iteration: 5000; Batch size: 125.
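
For convenience, the settings listed in Table C.3 can be gathered into a single configuration mapping. The sketch below copies the values quoted above and omits the elided rows; the key names are illustrative assumptions, not the repository's configuration schema.

```python
# Hyper-parameters quoted from Table C.3 (elided rows omitted).
# Key names are illustrative, not the repository's actual schema.
PLI_CONFIG = {
    "likelihood_kernel": "exponential",
    "trust_region_threshold_eps": 0.5,
    "model": "neural_spline_flow",
    "learning_rate": 1e-5,
    "epochs": 20,
    "train_samples_per_iteration": 5000,
    "batch_size": 125,
}
```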
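The paper's Algorithm 1 is not reproduced on this page. As a rough, hedged illustration of the pseudo-likelihood construction it refers to, the following sketch pairs an exponential kernel (the kernel family listed in Table C.3) with a Monte-Carlo estimate over simulator draws. All function names, the placeholder distance, and the smoothing constant are assumptions, not the paper's or repository's API; since the released code builds on Wasserstein-ABC [3], the actual discrepancy is presumably an optimal-transport distance (via OTT) rather than the Euclidean norm used here.

```python
import jax
import jax.numpy as jnp

# Hedged sketch of the pseudo-likelihood idea (not the paper's Algorithm 1):
# an exponential kernel exp(-d(x, x_obs) / eps) turns a data-space distance
# into a tractable surrogate likelihood.

def distance(x_sim, x_obs):
    # Placeholder discrepancy between simulated and observed data; the paper
    # likely uses an optimal-transport distance instead.
    return jnp.linalg.norm(x_sim - x_obs)

def exponential_kernel(x_sim, x_obs, eps):
    # Pseudo-likelihood weight: simulations close to the observations score
    # near 1, distant ones decay exponentially.
    return jnp.exp(-distance(x_sim, x_obs) / eps)

def pseudo_log_likelihood(theta, x_obs, simulate, key, eps, num_sims=100):
    # Monte-Carlo estimate over `num_sims` simulator draws at parameter theta.
    keys = jax.random.split(key, num_sims)
    x_sims = jax.vmap(lambda k: simulate(theta, k))(keys)
    weights = jax.vmap(lambda x: exponential_kernel(x, x_obs, eps))(x_sims)
    return jnp.log(jnp.mean(weights) + 1e-12)
```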
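To make the first benchmark task concrete, below is an illustrative Gaussian Location simulator in the style of the SBI benchmark suite [32]: a Gaussian prior over the location parameter and Gaussian observation noise. The dimension and noise scale are placeholder assumptions, not the paper's settings.

```python
import jax

# Illustrative Gaussian Location task in the style of the SBI benchmark
# suite [32]: prior theta ~ N(0, I), simulator x ~ N(theta, SIGMA^2 * I).
# DIM and SIGMA are placeholder assumptions, not the paper's settings.

DIM, SIGMA = 2, 1.0

def prior_sample(key):
    return jax.random.normal(key, (DIM,))

def simulate(theta, key):
    # Matches the `simulate(theta, key)` signature used in the kernel sketch.
    return theta + SIGMA * jax.random.normal(key, (DIM,))

key = jax.random.PRNGKey(0)
k_prior, k_obs = jax.random.split(key)
theta_true = prior_sample(k_prior)
x_obs = simulate(theta_true, k_obs)
```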
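For context on the population Monte Carlo (PMC) sampler mentioned in the Open Source Code row, here is a minimal sketch of one PMC iteration in the spirit of Algorithm 3 of [31]: resample by weight, perturb with a Gaussian kernel, and reweight against the target. The Gaussian perturbation, the bandwidth heuristic, and all names are assumptions, not the repository's implementation.

```python
import jax
import jax.numpy as jnp

def gaussian_logpdf(x, mean, scale):
    # Diagonal-Gaussian log-density; broadcasts x (D,) against mean (P, D).
    z = (x - mean) / scale
    return jnp.sum(-0.5 * z**2 - jnp.log(scale) - 0.5 * jnp.log(2 * jnp.pi),
                   axis=-1)

def pmc_step(key, particles, log_weights, log_target):
    # One population Monte Carlo iteration (sketch in the spirit of
    # Algorithm 3 of [31]). Shapes: particles (P, D), log_weights (P,),
    # with log_weights normalized in log-space.
    k_resample, k_perturb = jax.random.split(key)
    num_particles = particles.shape[0]
    # Resample particle indices proportionally to the importance weights.
    idx = jax.random.categorical(k_resample, log_weights,
                                 shape=(num_particles,))
    centers = particles[idx]
    # Gaussian perturbation with an empirical-spread bandwidth (a common
    # PMC heuristic; the paper's kernel may differ).
    scale = jnp.std(particles, axis=0) + 1e-6
    proposal = centers + scale * jax.random.normal(k_perturb, centers.shape)
    # Importance weights: target density over the proposal mixture
    # sum_j w_j * N(theta; theta_j, diag(scale^2)).
    log_mix = jax.vmap(
        lambda th: jax.nn.logsumexp(log_weights
                                    + gaussian_logpdf(th, particles, scale))
    )(proposal)
    new_logw = jax.vmap(log_target)(proposal) - log_mix
    return proposal, new_logw - jax.nn.logsumexp(new_logw)
```

In a PLI-style loop, `log_target(theta)` would combine the log prior with the `pseudo_log_likelihood` estimate from the earlier sketch; this pairing is an assumption for illustration, not a statement of the paper's exact update.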