Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Score Matched Neural Exponential Families for Likelihood-Free Inference
Authors: Lorenzo Pacchiardi, Ritabrata Dutta
JMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our methods on toy models with known likelihood and a large-dimensional time-series model. ... 5. Simulation Studies We perform simulation studies with our proposed approaches (Exc-SM, Exc-SSM, ABC-SM and ABC-SSM) and we compare with:... |
| Researcher Affiliation | Academia | Lorenzo Pacchiardi EMAIL Department of Statistics University of Oxford Oxford, OX1 3LB United Kingdom Ritabrata Dutta EMAIL Department of Statistics University of Warwick Coventry, CV4 7AL United Kingdom |
| Pseudocode | Yes | Algorithm 1 Original exchange MCMC algorithm (Murray et al., 2012). ... Algorithm 2 Exchange MCMC algorithm with bridging (Murray et al., 2012). ... Algorithm 3 Exchange MCMC algorithm (Murray et al., 2012) with inner MCMC. ... Algorithm 4 Exchange MCMC algorithm (Murray et al., 2012) with inner MCMC and bridging. |
| Open Source Code | Yes | Code for reproducing the experiments is available at https://github.com/LoryPack/SM-Exp-Fam-LFI. |
| Open Datasets | Yes | We validate our methods on toy models with known likelihood and a large-dimensional time-series model. ... Specifically, we consider three exponential family models and two time-series models (AR(2) and MA(2)) for which the exact likelihood is available, as well as a large-dimensional model with unknown likelihood (the Lorenz96 model, Lorenz 1996; Wilks 2005). |
| Dataset Splits | Yes | For all techniques, we use 10^4 training samples. ... in denotes MCC on training data used to find the best transformation, while out denotes MCC on test data. We used 500 samples in both training and test data sets. |
| Hardware Specification | No | The paper mentions "computational resources" in the Acknowledgments and "Computations are done on a CPU machine with 8 cores." in Figure 8 caption. This is not specific enough to identify exact CPU/GPU models or memory specifications. |
| Software Dependencies | No | The paper mentions "Python library ABCpy (Dutta et al., 2021b)" and "PyTorch (Paszke et al., 2019)" but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | For all techniques, we use 10^4 training samples. All NNs use SoftPlus nonlinearity (NNs using the more common ReLU nonlinearity cannot be used with SM and SSM as they have null second derivative with respect to the input). ... In Exc-SM and Exc-SSM, we run Exchange MCMC for 20000 steps, of which the first 10000 are burned-in. During burn-in, at intervals of 100 outer steps, we tune the proposal sizes according to the acceptance rate in the previous 100 steps. ... Table 8 (Appendix F) provides a detailed list of hyperparameters including learning rates, number of training samples, epochs, early stopping parameters, and scheduler usage for different models and methods. |
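The Experiment Setup row quotes an adaptive-tuning schedule: 20000 MCMC steps, the first 10000 burned in, with proposal sizes retuned every 100 burn-in steps from the acceptance rate of the previous 100 steps. A minimal random-walk Metropolis sketch of that schedule is below; the Gaussian target, the 0.44 acceptance target, and the 1.1/0.9 rescaling factors are illustrative assumptions, not the authors' Exchange MCMC implementation.

```python
import numpy as np

def adaptive_mcmc(log_target, theta0, n_steps=20000, burn_in=10000,
                  tune_interval=100, init_scale=1.0, target_accept=0.44,
                  seed=None):
    """Random-walk Metropolis with proposal-size tuning during burn-in.

    Mirrors the schedule described in the paper (tune every 100 steps
    during burn-in using the acceptance rate over the previous 100
    steps); all other choices here are assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    scale = init_scale
    samples, accepts = [], 0
    for step in range(1, n_steps + 1):
        # Gaussian random-walk proposal with the current step size
        prop = theta + scale * rng.standard_normal(theta.shape)
        if np.log(rng.uniform()) < log_target(prop) - log_target(theta):
            theta, accepts = prop, accepts + 1
        # During burn-in, rescale the proposal from the last window's
        # acceptance rate, then reset the counter for the next window.
        if step <= burn_in and step % tune_interval == 0:
            rate = accepts / tune_interval
            scale *= 1.1 if rate > target_accept else 0.9
            accepts = 0
        if step > burn_in:
            samples.append(theta.copy())
    return np.array(samples), scale
```

For example, sampling a 1-D standard normal via `adaptive_mcmc(lambda t: -0.5 * np.sum(t**2), np.zeros(1))` returns the 10000 post-burn-in draws together with the tuned proposal scale.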