Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning Likelihood-Free Reference Priors

Authors: Nicholas George Bishop, Daniel Jarne Ornia, Joel Dyer, Ani Calinescu, Michael J. Wooldridge

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments demonstrate that good approximations to reference priors for simulation models are in this way attainable, providing a first step towards the development of likelihood-free objective Bayesian inference procedures. ... Here, we present a series of experiments to assess the RP-learning methods described in Section 4."
Researcher Affiliation | Academia | "University of Oxford. Correspondence to: Nicholas Bishop <EMAIL>, Daniel Jarne Ornia <EMAIL>, Joel Dyer <EMAIL>."
Pseudocode | Yes | "Algorithm 1 Flow Pretraining Procedure pretrain; Algorithm 2 Training Procedure with Variational Lower Bounds; Algorithm 3 Flow Pretraining Procedure pretrain-conditional; Algorithm 4 Training for GED"
Open Source Code | Yes | Code available at https://github.com/joelnmdyer/lf_reference_priors.
Open Datasets | Yes | "We next consider the popular SBI benchmark task SLCPD (Lueckmann et al., 2021), based on the experiment first introduced by Papamakarios et al. (2019). ... The g-and-k model appears frequently as a benchmark case study for SBI methods (see, e.g., Fearnhead & Prangle, 2012)."
Dataset Splits | No | The paper describes generating data from simulators (e.g., "n samples are generated iid from N(µ, σ²)" or "iid data is generated for t = 1, ..., n") rather than using predefined splits of a static dataset, so conventional train/validation/test splits are not reported.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper cites "PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019" and "Adam (Kingma, 2014)". While PyTorch is a key software component, "2019" is a publication year, not a version number; no software components are listed with specific versions.
Experiment Setup | Yes | Table 4 (hyperparameter settings for InfoNCE and SMILE experiments) and Table 6 (hyperparameter settings for GED).
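The "Dataset Splits" entry reflects a pattern common to simulation-based inference: training data is drawn fresh from a simulator rather than partitioned from a fixed dataset. A minimal illustrative sketch (not the authors' code; the Gaussian stands in for an arbitrary simulator, and all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(mu: float, sigma: float, n: int) -> np.ndarray:
    """Draw n i.i.d. observations from N(mu, sigma^2)."""
    return rng.normal(loc=mu, scale=sigma, size=n)

# Each training batch is simulated on demand, so there is no static
# dataset to split into train/validation/test partitions.
x = simulate(mu=0.0, sigma=1.0, n=100)
print(x.shape)  # (100,)
```

Because new samples can be generated at will, evaluation data is typically obtained by further simulation rather than by holding out a fixed test split.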