PAC-Bayesian Theory Meets Bayesian Inference

Authors: Pascal Germain, Francis Bach, Alexandre Lacoste, Simon Lacoste-Julien

NeurIPS 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section 6, we study the Bayesian model selection from a PAC-Bayesian perspective, and illustrate our finding on classical Bayesian regression tasks. (...) To produce Figures 1a and 1b, we reimplemented the toy experiment of Bishop [5, Section 3.5.1]. (...) Figure 1c compares the values of the PAC-Bayesian bounds presented in this paper on a synthetic dataset...
Researcher Affiliation | Collaboration | Pascal Germain, Francis Bach, Simon Lacoste-Julien (INRIA Paris, École Normale Supérieure; firstname.lastname@inria.fr) and Alexandre Lacoste (Google; allac@google.com)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code for the methodology described.
Open Datasets | No | To produce Figures 1a and 1b, we reimplemented the toy experiment of Bishop [5, Section 3.5.1]. That is, we generated a learning sample of 15 data points according to y = sin(x) + ε, where x is uniformly sampled in the interval [0, 2π] and ε ∼ N(0, 1/4) is a Gaussian noise. (...) Figure 1c compares the values of the PAC-Bayesian bounds presented in this paper on a synthetic dataset, where each input x ∈ R^20 is generated by a Gaussian x ∼ N(0, I). The associated output y ∈ R is given by y = w·x + ε, with ‖w‖ = 1 and ε ∼ N(0, 1/9). (See the data-generation sketch below the table.)
Dataset Splits | No | No specific validation dataset splits are mentioned. The paper discusses 'training samples' and evaluates the 'generalization risk (computed on a test sample of size 1000)'.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned. The paper only describes the synthetic data generation and the models learned.
Software Dependencies | No | No specific software dependencies with version numbers are mentioned.
Experiment Setup | Yes | More precisely, for a polynomial model of degree d, we map input x ∈ R to a vector φ(x) = [1, x, x^2, . . . , x^d] ∈ R^(d+1), and we fix parameters σ^2 = 1/0.005 and σ'^2 = 1/2. (...) We perform Bayesian linear regression in the input space, i.e., φ(x) = x, fixing σ^2 = 1/100 and σ'^2 = 2. (See the regression sketch below the table.)
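
For the Open Datasets row, here is a minimal sketch of how the two quoted synthetic datasets could be regenerated (NumPy). The unit weight norm and the 1/9 noise variance for the Figure 1c dataset are reconstructed from a garbled quote and should be treated as assumptions; the random seed and the sample size n of the second dataset are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen for illustration

# Figures 1a/1b data (Bishop, Section 3.5.1): 15 points,
# y = sin(x) + eps, with x uniform on [0, 2*pi] and eps ~ N(0, 1/4).
x_sin = rng.uniform(0.0, 2.0 * np.pi, size=15)
y_sin = np.sin(x_sin) + rng.normal(0.0, np.sqrt(1.0 / 4.0), size=15)

# Figure 1c data: x in R^20 with x ~ N(0, I), y = w . x + eps.
# Assumption: ||w|| = 1 and eps ~ N(0, 1/9), reconstructed from the
# garbled quote; the sample size n is illustrative.
n, d = 100, 20
w = rng.standard_normal(d)
w /= np.linalg.norm(w)                      # assumed unit-norm weights
X = rng.standard_normal((n, d))             # each row x ~ N(0, I)
y = X @ w + rng.normal(0.0, np.sqrt(1.0 / 9.0), size=n)
```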
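
For the Experiment Setup row, a minimal sketch of Bayesian linear regression with the quoted polynomial feature map (NumPy). Reading σ^2 as the prior variance and σ'^2 as the observation-noise variance is an assumption; the quote does not state which parameter plays which role.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, size=15)        # sinusoidal sample
y = np.sin(x) + rng.normal(0.0, 0.5, size=15)     # noise std = sqrt(1/4)

def poly_features(x, d):
    """phi(x) = [1, x, x^2, ..., x^d], one row per input point."""
    return np.vander(np.asarray(x, dtype=float), N=d + 1, increasing=True)

def gaussian_posterior(Phi, y, prior_var, noise_var):
    """Posterior N(mu, Sigma) over the weights for a prior N(0, prior_var*I)
    and likelihood N(Phi @ w, noise_var*I).
    Assumption: sigma^2 -> prior_var and sigma'^2 -> noise_var."""
    k = Phi.shape[1]
    Sigma = np.linalg.inv(Phi.T @ Phi / noise_var + np.eye(k) / prior_var)
    mu = Sigma @ (Phi.T @ y) / noise_var
    return mu, Sigma

# Degree-3 polynomial model with the quoted sigma^2 = 1/0.005, sigma'^2 = 1/2.
mu, Sigma = gaussian_posterior(poly_features(x, d=3), y,
                               prior_var=1 / 0.005, noise_var=1 / 2)

# Posterior-mean predictions on a dense grid, for plotting against sin(x).
grid = np.linspace(0.0, 2.0 * np.pi, 200)
y_pred = poly_features(grid, d=3) @ mu
```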