HNPE: Leveraging Global Parameters for Neural Posterior Estimation
Authors: Pedro Rodrigues, Thomas Moreau, Gilles Louppe, Alexandre Gramfort
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We quantitatively validate our proposal on a motivating example amenable to analytical solutions and then apply it to invert a well-known non-linear model from computational neuroscience. All experiments described next are implemented with Python |
| Researcher Affiliation | Academia | Pedro L. C. Rodrigues (Inria, CEA, Université Paris-Saclay, France); Thomas Moreau (Inria, CEA, Université Paris-Saclay, France); Gilles Louppe (University of Liège, Belgium); Alexandre Gramfort (Inria, CEA, Université Paris-Saclay, France) |
| Pseudocode | Yes | Algorithm 1: Sequential posterior estimation for hierarchical models with global parameters (a hedged, generic multi-round sketch using the sbi package follows the table) |
| Open Source Code | Yes | Code is available in the supplementary materials. The code required for reproducing most of the results presented in the paper is available at https://github.com/plcrodrigues/HNPE |
| Open Datasets | Yes | Data consists of recordings taken from a public dataset (Cattan et al., 2018) |
| Dataset Splits | No | The paper does not explicitly provide specific training/validation/test dataset splits with percentages or sample counts for reproducibility. |
| Hardware Specification | No | This work was granted access to the HPC resources of IDRIS under allocations 2021-AD011011172R1 made by GENCI. |
| Software Dependencies | No | All experiments described next are implemented with Python (Python Software Foundation, 2017) and the sbi package (Tejero-Cantero et al., 2020) combined with PyTorch (Paszke et al., 2019), Pyro (Bingham et al., 2018) and nflows (Durkan et al., 2020a) for posterior estimation |
| Experiment Setup | Yes | In all experiments, we use the Adam optimizer (Kingma and Ba, 2014) with default parameters, a learning rate of 5·10⁻⁴ and a batch size of 100. Our approximation to the posterior distribution consists of two conditional neural spline flows of linear order (Durkan et al., 2019), qφ1 and qφ2, both conditioned by dense neural networks with one layer and 20 hidden units. The normalizing flows qφ1 and qφ2 used in our approximations are masked autoregressive flows (MAF) (Papamakarios et al., 2017) consisting of three stacked masked autoencoders (MADE) (Germain et al., 2015), each with two hidden layers of 50 units, and a standard normal base distribution as input to the normalizing flow. (A rough nflows sketch of this setup follows the table.) |
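
The paper's Algorithm 1 (hierarchical factorization over local and global parameters) is not reproduced in this summary. As a rough orientation only, the sketch below shows the generic multi-round (sequential) posterior estimation loop exposed by the sbi package named above; it omits the hierarchical structure that Algorithm 1 adds. The prior, `simulator`, `x_o`, and the round/simulation counts are placeholders, and the exact sbi interface varies slightly across versions.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Placeholder prior, simulator and observation; the paper's models differ.
prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))

def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

x_o = torch.full((1, 2), 0.5)

inference = SNPE(prior=prior)
proposal = prior
for _ in range(2):  # number of rounds and simulations per round are illustrative
    theta = proposal.sample((500,))
    x = simulator(theta)
    density_estimator = inference.append_simulations(theta, x, proposal=proposal).train()
    posterior = inference.build_posterior(density_estimator)
    proposal = posterior.set_default_x(x_o)  # next round focuses simulations on x_o
```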
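The MAF described in the Experiment Setup row (three stacked MADEs, two hidden layers of 50 units each, standard normal base distribution, Adam with learning rate 5·10⁻⁴ and batch size 100) can be sketched with nflows roughly as follows. The parameter and context dimensions, the `build_maf` helper, and the dummy training batch are illustrative assumptions, not values taken from the paper.

```python
import torch
from torch import optim
from nflows.flows.base import Flow
from nflows.distributions.normal import StandardNormal
from nflows.transforms.base import CompositeTransform
from nflows.transforms.autoregressive import MaskedAffineAutoregressiveTransform

def build_maf(features, context_features, num_mades=3, hidden_units=50, num_layers=2):
    """MAF: three stacked MADEs, each with two hidden layers of 50 units,
    and a standard normal base distribution (as quoted in the table above)."""
    transforms = [
        MaskedAffineAutoregressiveTransform(
            features=features,
            hidden_features=hidden_units,
            context_features=context_features,
            num_blocks=num_layers,
        )
        for _ in range(num_mades)
    ]
    return Flow(CompositeTransform(transforms), StandardNormal(shape=[features]))

# Placeholder dimensions; the paper conditions qφ1 and qφ2 on observed data/summaries.
flow = build_maf(features=2, context_features=20)
optimizer = optim.Adam(flow.parameters(), lr=5e-4)  # Adam, lr = 5e-4 as in the paper

theta = torch.randn(100, 2)     # one dummy batch of 100 parameter draws (batch size 100)
context = torch.randn(100, 20)  # dummy conditioning features
loss = -flow.log_prob(inputs=theta, context=context).mean()  # maximum-likelihood loss
loss.backward()
optimizer.step()
```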