Fearless Stochasticity in Expectation Propagation

Authors: Jonathan So, Richard Turner

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate their efficacy on a variety of probabilistic inference tasks.
Researcher Affiliation | Academia | Jonathan So, University of Cambridge, js2488@cam.ac.uk; Richard E. Turner, University of Cambridge and The Alan Turing Institute
Pseudocode | Yes | Algorithm 1: EP (β_i = 1, n_inner = 1), power EP (β_i ≠ 1, n_inner = 1), and their double-loop variants (n_inner > 1)... Algorithm 2: EP-η and EP-µ (differences with Algorithm 1 are highlighted in green). (A generic EP-style sweep is sketched below the table.)
Open Source Code | Yes | Code for these experiments can be found at https://github.com/cambridge-mlg/fearless-ep.
Open Datasets | Yes | The data, taken from the 2018 Cooperative Congressional Election Study, related to support for allowing employers to decline coverage of abortions in insurance plans [33, 43]... The data is available at https://github.com/JuanLopezMartin/MRPCaseStudy... Neural data were recorded by Tim Blanche in the laboratory of Nicholas Swindale, University of British Columbia, and downloaded from the NSF-funded CRCNS Data Sharing website https://crcns.org/data-sets/vc/pvc-3 [4].
Dataset Splits | No | The paper focuses on approximate Bayesian inference and does not use traditional train/validation/test splits in the supervised-learning sense. It evaluates the inference algorithm's performance on given datasets against an estimated optimum, and its hyperparameter search with repeated random seeds serves as a check on the reproducibility of the experimental setup.
Hardware Specification | Yes | All experiments were executed on 76-core Dell PowerEdge C6520 servers with 256 GiB RAM and dual Intel Xeon Platinum 8368Q (Ice Lake) 2.60 GHz processors. Each individual run was assigned to a single core.
Software Dependencies | No | Implementations were written in JAX [7], with NUTS [27] used as the underlying sampler. We used the numpyro [42] implementation of NUTS with default settings. For experiments with a NIW base family, we performed mean-to-natural parameter conversions using the method of So [45], with JAXopt [5] used to perform implicit differentiation through the iterative solve. No specific version numbers for JAX, numpyro, or JAXopt are provided. (A minimal numpyro NUTS invocation is shown below the table.)
Experiment Setup | Yes | We used 500 different hyperparameter settings for each variant, chosen using random search, and repeated each run 5 times using different random seeds for NUTS. All runs of EP-η and EP-µ were performed with n_samp = 1 and n_inner = 1... The step size for EP, α, was drawn log-uniformly in the range (10^-4, 1), and n_samp was drawn log-uniformly in [d + 2.5, 10000.5) and then rounded to the nearest integer, where d is the dimensionality of z. (The quoted ranges are turned into a small sampler sketch below the table.)
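
Algorithms 1 and 2 are given in the paper itself; as a rough illustration of the structure the Pseudocode row refers to, the following is a minimal sketch of a damped power-EP sweep in natural parameters. The parameterisation, the `tilted_moment_fn` helper, and the update form are assumptions made for illustration, not the authors' Algorithm 1 or their EP-η/EP-µ variants.

```python
import numpy as np

def power_ep_sweeps(lam0, sites, tilted_moment_fn, alpha=0.5, beta=1.0, n_sweeps=50):
    """Damped power-EP sweeps in natural parameters (illustrative sketch only).

    lam0             -- natural parameters of the prior / base distribution
    sites            -- list of per-factor site natural parameters (e.g. initialised to zeros)
    tilted_moment_fn -- tilted_moment_fn(i, lam_cav, beta): natural parameters of the
                        exponential-family projection of q_cav(z) * f_i(z)**beta,
                        e.g. estimated by quadrature or sampling
    alpha            -- damping / step size; beta = 1 recovers standard EP
    """
    sites = [np.asarray(s, dtype=float).copy() for s in sites]
    lam_global = lam0 + sum(sites)                        # q(z) ∝ exp(lam_global · T(z))
    for _ in range(n_sweeps):
        for i in range(len(sites)):
            lam_i = sites[i]
            lam_cav = lam_global - beta * lam_i           # cavity: remove a power of site i
            lam_tilted = tilted_moment_fn(i, lam_cav, beta)       # moment-match tilted dist.
            lam_i_new = (lam_tilted - lam_cav) / beta             # undamped site update
            sites[i] = (1.0 - alpha) * lam_i + alpha * lam_i_new  # damped site update
            lam_global = lam_global + (sites[i] - lam_i)          # refresh global approximation
    return lam_global, sites
```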
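The Software Dependencies row names numpyro's NUTS, run with default settings, as the underlying sampler. For reference, a self-contained numpyro NUTS invocation looks like the following; the toy model and sample counts are placeholders, not taken from the paper.

```python
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(y):
    # Toy model: unknown Gaussian mean with a standard-normal prior (placeholder).
    z = numpyro.sample("z", dist.Normal(0.0, 1.0))
    numpyro.sample("y", dist.Normal(z, 1.0), obs=y)

y = jnp.array([0.3, -0.1, 0.8])
kernel = NUTS(model)                                   # default NUTS settings
mcmc = MCMC(kernel, num_warmup=500, num_samples=1000)
mcmc.run(jax.random.PRNGKey(0), y)
samples = mcmc.get_samples()["z"]                      # posterior samples of z
```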
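The ranges quoted in the Experiment Setup row translate directly into a small random-search sampler. A sketch, assuming a NumPy implementation and taking d to be the dimensionality of z:

```python
import numpy as np

def sample_hyperparameters(d, rng):
    """Draw one random-search setting following the quoted ranges.

    alpha  -- EP step size, log-uniform on (1e-4, 1)
    n_samp -- log-uniform on [d + 2.5, 10000.5), rounded to the nearest integer
    """
    alpha = np.exp(rng.uniform(np.log(1e-4), np.log(1.0)))
    n_samp = int(np.rint(np.exp(rng.uniform(np.log(d + 2.5), np.log(10000.5)))))
    return {"alpha": alpha, "n_samp": n_samp}

rng = np.random.default_rng(0)
configs = [sample_hyperparameters(d=10, rng=rng) for _ in range(500)]  # 500 settings per variant
```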