Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Truncated Marginal Neural Ratio Estimation

Authors: Benjamin K Miller, Alex Cole, Patrick Forré, Gilles Louppe, Christoph Weniger

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform experiments on a marginalized version of the simulation-based inference benchmark and two complex and narrow posteriors, highlighting the simulator efficiency of our algorithm as well as the quality of the estimated marginal posteriors." [...] "First, we perform experiments to compare TMNRE to other algorithms on standard benchmarks from the simulation-based inference literature." (Section 3, Experiments)
Researcher Affiliation | Academia | "Benjamin Kurt Miller, University of Amsterdam, EMAIL; Alex Cole, University of Amsterdam, EMAIL; Patrick Forré, University of Amsterdam, EMAIL; Gilles Louppe, University of Liège, EMAIL; Christoph Weniger, University of Amsterdam, EMAIL"
Pseudocode | Yes | "Algorithm 1: Truncated Marginal Neural Ratio Estimation (TMNRE)"
Open Source Code | Yes | Implementation on GitHub. "Implementation of experiments at https://github.com/bkmi/tmnre/. Ready-to-use implementation of the underlying algorithm at https://github.com/undark-lab/swyft/."
Open Datasets | Yes | "We compare the performance of our algorithm with other traditional and neural simulation-based inference methods on a selection of problems from the SBI benchmark [18]."
Dataset Splits | No | The paper mentions training data and early stopping, which implies a validation set, but it does not provide the specific split information (e.g., percentages or counts) needed for reproduction.
Hardware Specification | Yes | "We want to thank the DAS-5 computing cluster for access to their Titan X GPUs."
Software Dependencies | No | "This work uses numpy [54], scipy [55], seaborn [56], matplotlib [57], altair [58, 59], pandas [60, 61], pytorch [62], and jupyter [63]." No specific version numbers are provided in the paper text.
Experiment Setup | Yes | "Like sequential methods [13, 17], the number of rounds M, the training data per round N^(m), and any stopping criteria β are hyperparameters. For further discussion and default values see Appendix A; for bound derivations and limitations see Appendices C and D."