Hierarchical Gaussian Process Priors for Bayesian Neural Network Weights
Authors: Theofanis Karaletsos, Thang D. Bui
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the proposed priors and inference scheme on several regression and classification datasets, and study the effects of the proposed per-datapoint priors on extrapolation, interpolation, and out-of-distribution data. |
| Researcher Affiliation | Collaboration | Theofanis Karaletsos (Facebook, theokara@fb.com); Thang D. Bui (Uber AI and University of Sydney, thang.bui@uber.com) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper states 'Additional updates and results will be available on https://arxiv.org/abs/2002.04033.', but this links to the arXiv paper itself, not to source code. |
| Open Datasets | Yes | We first illustrate the performance of the proposed model on a classification example. We generate a dataset of 100 data points and four classes. (An illustrative sketch of such a dataset follows the table.) |
| Dataset Splits | No | The paper mentions 'training points' and 'test sets' but does not specify explicit train/validation/test dataset splits with percentages, sample counts, or clear methodologies for reproduction. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments, such as specific GPU/CPU models or memory details. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | We use M = 50 inducing weights for all experiments in this section. Details for the experimental settings and additional experiments are included in the appendices. (A generic inducing-point sketch also follows the table.) |
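
The Open Datasets row quotes a synthetic classification set of 100 points and four classes. The snippet below is a minimal, hypothetical sketch of how such a dataset could be generated; the cluster layout, noise level, seed, and the function name `make_toy_classification` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: a 100-point, four-class synthetic classification set.
# The section only states the size and class count; everything else here is assumed.
import numpy as np

def make_toy_classification(n_points=100, n_classes=4, noise=0.15, seed=0):
    """Sample n_points 2-D inputs arranged in n_classes Gaussian clusters."""
    rng = np.random.default_rng(seed)
    per_class = n_points // n_classes
    # Place cluster centres evenly on the unit circle (an assumption, not from the paper).
    angles = 2 * np.pi * np.arange(n_classes) / n_classes
    centres = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    X = np.concatenate(
        [centres[c] + noise * rng.standard_normal((per_class, 2)) for c in range(n_classes)]
    )
    y = np.repeat(np.arange(n_classes), per_class)
    return X, y

X, y = make_toy_classification()
print(X.shape, y.shape)  # (100, 2) (100,)
```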
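
The Experiment Setup row notes that M = 50 inducing weights are used throughout. As a generic illustration of what inducing variables buy computationally, the sketch below computes a sparse (Titsias-style) GP predictive mean with M = 50 inducing inputs on a toy 1-D regression problem; it is not the paper's hierarchical weight-space construction, and the kernel, data, and settings are assumptions made for the example.

```python
# Generic inducing-point sketch: sparse GP predictive mean with M = 50 inducing
# inputs, costing O(N M^2) instead of O(N^3). Illustrative only; not the paper's
# per-weight hierarchical prior.
import numpy as np

def rbf(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
N, M, noise = 500, 50, 0.1
X = np.sort(rng.uniform(-3, 3, (N, 1)), axis=0)
y = np.sin(2 * X[:, 0]) + noise * rng.standard_normal(N)
Z = np.linspace(-3, 3, M)[:, None]      # M = 50 inducing inputs
Xs = np.linspace(-3, 3, 200)[:, None]   # test inputs

Kzz = rbf(Z, Z) + 1e-6 * np.eye(M)
Kzx = rbf(Z, X)
Ksz = rbf(Xs, Z)
# Predictive mean of the variational sparse GP: K_*z (sigma^2 K_zz + K_zx K_xz)^-1 K_zx y
A = noise**2 * Kzz + Kzx @ Kzx.T
mean = Ksz @ np.linalg.solve(A, Kzx @ y)
print(mean.shape)  # (200,)
```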