Truncated Marginal Neural Ratio Estimation
Authors: Benjamin K Miller, Alex Cole, Patrick Forré, Gilles Louppe, Christoph Weniger
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on a marginalized version of the simulation-based inference benchmark and two complex and narrow posteriors, highlighting the simulator efficiency of our algorithm as well as the quality of the estimated marginal posteriors. (Section 3, Experiments) First, we perform experiments to compare TMNRE to other algorithms on standard benchmarks from the simulation-based inference literature. |
| Researcher Affiliation | Academia | Benjamin Kurt Miller University of Amsterdam b.k.miller@uva.nl Alex Cole University of Amsterdam a.e.cole@uva.nl Patrick Forré University of Amsterdam p.d.forre@uva.nl Gilles Louppe University of Liège g.louppe@uliege.be Christoph Weniger University of Amsterdam c.weniger@uva.nl |
| Pseudocode | Yes | Algorithm 1 Truncated Marginal Neural Ratio Estimation (TMNRE) |
| Open Source Code | Yes | Implementation on GitHub. Implementation of experiments at https://github.com/bkmi/tmnre/. Ready-to-use implementation of underlying algorithm at https://github.com/undark-lab/swyft/. |
| Open Datasets | Yes | We compare the performance of our algorithm with other traditional and neural simulation-based inference methods on a selection of problems from the SBI benchmark [18]. |
| Dataset Splits | No | The paper mentions training data and early stopping, which implies validation, but does not provide specific dataset split information (e.g., percentages or counts) for reproduction. |
| Hardware Specification | Yes | We want to thank the DAS-5 computing cluster for access to their Titan X GPUs. |
| Software Dependencies | No | This work uses numpy [54], scipy [55], seaborn [56], matplotlib [57], altair [58, 59], pandas [60, 61], pytorch [62], and jupyter [63]. No specific version numbers are provided in the paper text. |
| Experiment Setup | Yes | Like sequential methods [13, 17], the number of rounds M, the training data per round N^(m), and any stopping criteria β are hyperparameters. For further discussion and default values see Appendix A; for bound derivations and limitations see Appendix C and D. |
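The hyperparameters listed above (rounds M, simulations per round N^(m), and a truncation threshold β) describe TMNRE's iterative structure: each round draws parameters from the current truncated prior, trains a ratio estimator, and shrinks the prior to the high-ratio region. The following is a minimal toy sketch of that round structure only; the kernel-based `ratio` stand-in, the 1-D box prior, and all function names are illustrative assumptions, not the paper's actual neural ratio estimator.

```python
import numpy as np


def tmnre_sketch(x_obs, simulate, prior_low, prior_high,
                 n_rounds=3, n_per_round=1000, beta=1e-4, seed=0):
    """Toy sketch of the TMNRE round structure (1-D box prior).

    Each round: sample parameters from the current truncated prior,
    simulate data, score parameters with a ratio estimate, and keep
    only the region where the ratio exceeds beta * max(ratio).
    """
    rng = np.random.default_rng(seed)
    low, high = prior_low, prior_high
    for _ in range(n_rounds):
        theta = rng.uniform(low, high, size=n_per_round)
        x = simulate(theta, rng)
        # Stand-in for a learned likelihood-to-evidence ratio estimator:
        # a Gaussian kernel comparing each simulation to the observation.
        ratio = np.exp(-0.5 * ((x - x_obs) / 0.1) ** 2)
        kept = theta[ratio > beta * ratio.max()]
        # Truncate the prior to the box bounding the retained samples.
        low, high = kept.min(), kept.max()
    return low, high


# Hypothetical usage: a noisy identity simulator with true theta = 0.5.
simulate = lambda theta, rng: theta + 0.05 * rng.normal(size=theta.shape)
low, high = tmnre_sketch(x_obs=0.5, simulate=simulate,
                         prior_low=-1.0, prior_high=1.0)
```

The returned interval `[low, high]` is a truncated prior concentrated around the plausible parameter region, mirroring how TMNRE reduces simulator calls by discarding parameter space the data already rules out.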