Likelihood-free MCMC with Amortized Approximate Ratio Estimators

Authors: Joeri Hermans, Volodimir Begy, Gilles Louppe

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The accuracy of our approach is demonstrated on a variety of benchmarks against well-established techniques. Scientific applications in physics show its applicability.
Researcher Affiliation | Academia | 1: University of Liège, Belgium; 2: University of Vienna, Austria. Correspondence to: Joeri Hermans <joeri.hermans@doct.uliege.be>.
Pseudocode | Yes | Algorithm 1: Optimization of dφ(x, θ).
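Algorithm 1 trains the discriminator dφ(x, θ) to distinguish sample-parameter pairs drawn from the joint p(x, θ) from pairs drawn from the product of marginals p(x)p(θ); the classifier's logit then estimates the log likelihood-to-evidence ratio. A minimal sketch under stated assumptions: the Gaussian toy simulator, the quadratic features, and the logistic-regression discriminator are illustrative stand-ins (the paper trains an MLP on its own benchmarks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy simulator (an assumption, not one of the paper's
# benchmarks): theta ~ N(0, 1) prior, x ~ N(theta, 1) likelihood.
n = 20_000
theta = rng.normal(0.0, 1.0, n)
x = rng.normal(theta, 1.0)

# Dependent pairs from the joint p(x, theta) are labeled 1; permuting
# theta breaks the dependency, yielding p(x)p(theta) pairs labeled 0.
theta_marginal = rng.permutation(theta)

def features(x, t):
    # Quadratic features are sufficient for this Gaussian toy problem.
    x, t = np.atleast_1d(x), np.atleast_1d(t)
    return np.stack([x, t, x * t, x**2, t**2, np.ones_like(x)], axis=1)

X = np.concatenate([features(x, theta), features(x, theta_marginal)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic-regression discriminator d(x, theta) fit by full-batch
# gradient descent (the paper instead uses a low-capacity MLP).
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def log_ratio(x, t):
    # logit(d(x, theta)) estimates log p(x|theta)/p(x),
    # the log likelihood-to-evidence ratio.
    return float(features(x, t) @ w)
```

Because the discriminator is trained over the full joint, the resulting ratio estimate is amortized: after training, `log_ratio` can be queried for any (x, θ) pair without further simulation.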
Open Source Code | Yes | Code is available at https://github.com/montefiore-ai/hypothesis.
Open Datasets | No | The paper describes various benchmark problems (e.g., tractable problem, detector calibration, population model, M/G/1 queuing model) for which data is generated using specified forward models and simulators (e.g., the 'pythia simulator (Sjöstrand et al., 2008)' and the 'Lotka-Volterra model (Lotka, 1920)'). However, it does not provide access information (links, DOIs, or citations to pre-existing public datasets) for the raw data used for training or evaluation; the data is generated during the experiment.
Dataset Splits | No | The paper mentions 'We allocate a simulation budget of one million forward passes' and that 'All experiments are repeated 25 times.' It also discusses 'rounds' for sequential approaches. While these are details about the experimental process, there is no explicit mention of specific percentages or sample counts for traditional training, validation, and test splits of a fixed dataset.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications. It only describes the simulated environments or models being studied.
Software Dependencies | No | The paper mentions several software components, including the 'pythia simulator', 'pythiamill', 'autolens', and 'RESNET-18' (a model architecture), and cites PyTorch. However, it does not provide specific version numbers for these software dependencies, which is necessary for a reproducible setup.
Experiment Setup | Yes | The paper specifies several concrete experimental setup details: 'We allocate a simulation budget of one million forward passes', 'All experiments are repeated 25 times', 'Our ratio estimator is a low-capacity MLP with 3 layers and 50 hidden units', and 'In every round t, 10,000 sample-parameter pairs are drawn from the joint p(x, θ) with prior pt(θ) for training.' It also mentions the termination condition: 'terminating the algorithm' when AUC scores reach 0.50. These are concrete configurations.
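The titular use of the trained estimator is likelihood-free MCMC: in Metropolis-Hastings, the intractable acceptance ratio p(θ'|x)/p(θ|x) reduces to r(x|θ')p(θ') / (r(x|θ)p(θ)), since the evidence p(x) cancels. A minimal sketch, assuming a toy Gaussian model whose analytic log-ratio stands in for a trained dφ(x, θ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Gaussian model (an assumption for illustration): theta ~ N(0, 1),
# x ~ N(theta, 1). Its likelihood-to-evidence ratio is analytic here
# and stands in for the trained estimator d_phi(x, theta).
def log_ratio(x, t):
    # log p(x|theta) - log p(x), with evidence p(x) = N(x; 0, 2);
    # additive constants cancel in the acceptance ratio below.
    return -0.5 * (x - t) ** 2 + 0.25 * x ** 2

def log_prior(t):
    return -0.5 * t ** 2

# Metropolis-Hastings targeting p(theta|x) via r(x|theta) * p(theta):
# only the amortized ratio is queried, never the likelihood itself.
x_obs, theta = 1.5, 0.0
samples = []
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, 1.0)
    log_alpha = (log_ratio(x_obs, proposal) + log_prior(proposal)
                 - log_ratio(x_obs, theta) - log_prior(theta))
    if np.log(rng.uniform()) < log_alpha:
        theta = proposal
    samples.append(theta)

# Discard burn-in; the exact posterior here is N(x_obs / 2, 1/2).
posterior = np.array(samples[1000:])
```

Because the ratio estimator is amortized over observations, the same network can drive this chain for any new x_obs without retraining, which is what makes the approach practical within the paper's stated simulation budget.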