Is Learning Summary Statistics Necessary for Likelihood-free Inference?

Authors: Yanzhi Chen, Michael U. Gutmann, Adrian Weller

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on diverse inference tasks with different data types validate our hypothesis." (Section 5, Experiments)
Researcher Affiliation | Academia | 1 Department of Engineering, Cambridge University, UK; 2 School of Informatics, The University of Edinburgh, UK.
Pseudocode | Yes | Algorithm 1: Slice sufficient statistics learning; Algorithm 2: SNL with slice sufficient statistics.
Open Source Code | No | No explicit statement of releasing open-source code for the described methodology, and no link to a code repository, was found.
Open Datasets | No | No concrete access information (link, DOI, or formal citation with authors and year) for a publicly available or open dataset was provided. The paper states: "It only requires us to sample (i.e. simulate) data from the model." and "Input: simulated data D = {(θ^(j), x^(j))}_{j=1}^n".
Dataset Splits | Yes | "For all our experiments, we use early stopping to train all neural networks, where we use 80% of the data in training and 20% in validation (the patience threshold is 500 iterations)."
Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory amounts, or cloud instance names) used for running the experiments were mentioned. The paper discusses "execution time" without specifying the hardware on which it was measured.
Software Dependencies | No | No specific software dependency versions were provided. The paper mentions using a masked autoregressive flow (MAF) for density estimation and the Adam optimizer, but gives no version numbers for these or any other libraries.
Experiment Setup | Yes | "Throughout the experiments we use M = 8 slices and set d = K (except for the experiments where we select d according to Section 3.2) and d = 2. For all our experiments, we use early stopping to train all neural networks... The learning rate is 5 × 10^-4. A batch size of 200 is used for all networks." (A training-loop sketch based on these details follows the table.)
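
The Dataset Splits and Experiment Setup rows together describe a concrete training protocol: an 80%/20% train/validation split, Adam with learning rate 5 × 10^-4, a batch size of 200, and early stopping with a patience of 500 iterations. The snippet below is a minimal PyTorch sketch of that protocol, not the authors' code (none is released). The network `model`, the loss function `loss_fn`, and the simulated tensors `theta` and `x` are hypothetical placeholders, and checking the validation loss once per training iteration is one possible reading of "the patience threshold is 500 iterations".

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

def train_with_early_stopping(model, theta, x, loss_fn,
                              lr=5e-4, batch_size=200, patience=500):
    """Sketch of the quoted protocol: 80/20 split, Adam(lr=5e-4),
    batch size 200, early stopping after 500 non-improving iterations.
    `model`, `loss_fn`, `theta`, and `x` are placeholders."""
    data = TensorDataset(theta, x)
    n_train = int(0.8 * len(data))                       # 80% training, 20% validation
    train_set, val_set = random_split(data, [n_train, len(data) - n_train])
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=batch_size)

    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_val, best_state, stale = float("inf"), None, 0

    while stale < patience:
        for theta_b, x_b in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model, theta_b, x_b)
            loss.backward()
            optimizer.step()

            # Validation check per iteration; patience counts non-improving iterations.
            with torch.no_grad():
                val_loss = sum(loss_fn(model, t_v, x_v).item()
                               for t_v, x_v in val_loader)
            if val_loss < best_val:
                best_val, stale = val_loss, 0
                best_state = {k: v.clone() for k, v in model.state_dict().items()}
            else:
                stale += 1
            if stale >= patience:
                break

    if best_state is not None:
        model.load_state_dict(best_state)   # restore the best validation checkpoint
    return model
```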