Functional Variational Inference based on Stochastic Process Generators

Authors: Chao Ma, José Miguel Hernández-Lobato

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show that FVI consistently outperforms weight-space and function-space VI methods on several tasks, which validates the effectiveness of our approach."
Researcher Affiliation | Academia | "Chao Ma, University of Cambridge, Cambridge, UK, cm905@cam.ac.uk; José Miguel Hernández-Lobato, University of Cambridge, Cambridge, UK, jmh233@cam.ac.uk"
Pseudocode | Yes | "Algorithm 1 Functional Variational Inference (FVI)" (an illustrative function-space VI sketch follows the table)
Open Source Code | No | The paper states that "F-BNNs are based on the code kindly open-sourced by [47]", which refers to third-party code, not to an open-sourced implementation of the authors' own method (FVI).
Open Datasets | Yes | "We consider multivariate regression tasks based on 9 different UCI datasets. We also compare with three function-space BNN inference methods: VIP-BNNs, VIP-Neural processes [28], and f-BNNs. Finally, we include comparisons to function space particle optimization [50] in Appendix C.7 for reference purpose. All inference methods are based on the same BNN priors whenever applicable. For experimental settings, we follow [28]. Each dataset was randomly split into train (90%) and test sets (10%). This was repeated 10 times and results were averaged."
Dataset Splits | Yes | "Each dataset was randomly split into train (90%) and test sets (10%)." (a sketch of this split protocol follows the table)
Hardware Specification | No | The paper does not specify hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies | No | The paper mentions various software components and methods (e.g., "Bayes-by-Backprop [4]", PyTorch) but does not provide version numbers for any of them.
Experiment Setup | Yes | "The hyperparameter settings are consistent with [47] except that we used a smaller batchsize (32)." (a minimal batching sketch follows the table)
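
The paper's Algorithm 1 is not reproduced in this report. As a rough illustration of the kind of training step a function-space VI method built on a stochastic process generator involves, here is a minimal PyTorch sketch. The SPGenerator architecture, the standard-normal stand-in prior, and the moment-matched Gaussian KL used in place of the paper's functional-KL estimator are all assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    # Hypothetical stochastic process generator: one shared noise draw per
    # forward pass induces one coherent function sample over the inputs.
    class SPGenerator(nn.Module):
        def __init__(self, in_dim, noise_dim=16, hidden=64):
            super().__init__()
            self.noise_dim = noise_dim
            self.net = nn.Sequential(
                nn.Linear(in_dim + noise_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x):
            z = torch.randn(1, self.noise_dim).expand(x.shape[0], -1)
            return self.net(torch.cat([x, z], dim=-1))

    def moment_matched_kl(f_q, mu_p, var_p, eps=1e-6):
        # Crude moment-matched Gaussian KL over function values at the
        # measurement points; a placeholder for the paper's estimator.
        mu_q, var_q = f_q.mean(0), f_q.var(0) + eps
        return 0.5 * (torch.log(var_p / var_q)
                      + (var_q + (mu_q - mu_p) ** 2) / var_p - 1).sum()

    def fvi_step(spg, opt, x, y, x_measure, n_samples=8):
        # One ELBO-style update: fit the data while keeping function values
        # at the measurement points close to a standard-normal stand-in prior.
        opt.zero_grad()
        f_data = torch.stack([spg(x) for _ in range(n_samples)])
        f_meas = torch.stack([spg(x_measure) for _ in range(n_samples)])
        log_lik = -((f_data - y) ** 2).mean()  # Gaussian likelihood up to constants
        kl = moment_matched_kl(f_meas,
                               torch.zeros_like(f_meas[0]),
                               torch.ones_like(f_meas[0]))
        (kl - log_lik).backward()
        opt.step()

With spg = SPGenerator(in_dim=8) and opt = torch.optim.Adam(spg.parameters(), lr=1e-3), calling fvi_step(spg, opt, x, y, x_measure) performs one update; in function-space VI methods the measurement points would typically mix training inputs with random input locations.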
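
The split protocol quoted above (random 90%/10% train/test splits, repeated 10 times, results averaged) maps directly to code. A minimal sketch, assuming a hypothetical fit_and_score callable that trains a model on the train split and returns a test metric such as RMSE:

    import numpy as np

    def evaluate_with_repeated_splits(X, y, fit_and_score,
                                      n_repeats=10, train_frac=0.9, seed=0):
        # Random 90/10 train/test splits, repeated and averaged, matching
        # the protocol quoted above; fit_and_score is hypothetical.
        rng = np.random.default_rng(seed)
        scores = []
        for _ in range(n_repeats):
            idx = rng.permutation(len(X))
            n_train = int(train_frac * len(X))
            train, test = idx[:n_train], idx[n_train:]
            scores.append(fit_and_score(X[train], y[train], X[test], y[test]))
        return float(np.mean(scores)), float(np.std(scores))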
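
The only concrete hyperparameter quoted in the setup row is the batch size of 32; everything else is deferred to [47]. A minimal PyTorch sketch of that mini-batching setup, with placeholder data standing in for a UCI regression table:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholder data; only batch_size=32 is taken from the paper.
    X, y = torch.randn(1000, 8), torch.randn(1000, 1)
    loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

    for x_batch, y_batch in loader:
        pass  # a model update on a size-32 mini-batch would go here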