Quantized Variational Inference

Authors: Amir Dib

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To demonstrate the validity and effectiveness of our approach, we considered Bayesian Linear Regression (BLR) on various dataset, a Poisson Generalized Linear Model (GLM) on the frisk data and a Bayesian Neural Network (BNN) on the metro dataset."
Researcher Affiliation | Collaboration | Amir Dib, Université Paris-Saclay, CNRS, ENS Paris-Saclay, Centre Borelli, SNCF, ITNOVEM, 91190 Gif-sur-Yvette, France (amir.dib@ens-paris-saclay.fr)
Pseudocode | Yes | Algorithm 1: Monte Carlo Variational Inference; Algorithm 2: Quantized Variational Inference.
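Algorithms 1 and 2 are not reproduced in this report. As a hedged illustration of the contrast they name, the sketch below compares a plain Monte Carlo estimator of a Gaussian expectation (the core step of MCVI) with an estimator that replaces random draws by a fixed quantization grid of the standard normal. The Lloyd-style quantizer, grid size, and toy integrand are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lloyd_quantizer(n_points=8, iters=50, m=50_000, seed=0):
    # Crude Lloyd iteration approximating an optimal quantizer of N(0, 1):
    # points c_k and weights w_k such that sum_k w_k * f(c_k) ~ E[f(Z)].
    rng = np.random.default_rng(seed)
    s = rng.standard_normal(m)
    c = np.quantile(s, (np.arange(n_points) + 0.5) / n_points)  # quantile init
    for _ in range(iters):
        idx = np.abs(s[:, None] - c[None, :]).argmin(axis=1)   # nearest point
        c = np.array([s[idx == k].mean() for k in range(n_points)])
    idx = np.abs(s[:, None] - c[None, :]).argmin(axis=1)
    w = np.bincount(idx, minlength=n_points) / m               # cell masses
    return c, w

def mc_expectation(f, mu, sigma, n=20, seed=0):
    # Monte Carlo estimate of E[f(Z)], Z ~ N(mu, sigma^2): noisy, unbiased.
    rng = np.random.default_rng(seed)
    z = mu + sigma * rng.standard_normal(n)
    return f(z).mean()

def quantized_expectation(f, mu, sigma, points, weights):
    # Quantized estimate: deterministic weighted sum over the fixed grid,
    # shifted and scaled by the variational parameters (mu, sigma).
    z = mu + sigma * points
    return np.sum(weights * f(z))
```

For example, with `f(z) = z**2` and `Z ~ N(1, 0.5**2)` the exact value is 1.25; the quantized estimator returns the same deterministic answer on every call, while the Monte Carlo estimator fluctuates across seeds.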
Open Source Code | Yes | "The complete documented source code to reproduce all experiments is available on GitHub: https://github.com/amirdib/quantized-variational-inference"
Open Datasets | Yes | "To demonstrate the validity and effectiveness of our approach, we considered Bayesian Linear Regression (BLR) on various dataset, a Poisson Generalized Linear Model (GLM) on the frisk data and a Bayesian Neural Network (BNN) on the metro dataset."
Dataset Splits | No | The paper uses several datasets (Boston, Fires, Life Expect., Frisk, Metro) but does not specify how they were split into training, validation, and test sets, nor does it cite predefined splits or give explicit percentages or counts.
Hardware Specification | No | The paper states "Experiments are performed using python 3.8 with the computational library TensorFlow [1]" but gives no hardware details such as GPU model, CPU type, or memory.
Software Dependencies | Yes | "Experiments are performed using python 3.8 with the computational library TensorFlow [1]. Adam [19] optimizer is used with various learning rates α and default β1 = 0.9, β2 = 0.999 values recommended by the author."
Experiment Setup | Yes | The Adam [19] optimizer is used with learning rates α varied per experiment and the default β1 = 0.9, β2 = 0.999 values recommended by the author; for all experiments the sample size is N = 20.
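The reported optimizer settings (Adam with defaults β1 = 0.9, β2 = 0.999) can be sketched with a hand-rolled Adam update in NumPy rather than TensorFlow, so the snippet stays self-contained; the quadratic objective, learning rate, and step count are toy assumptions for illustration, not the paper's models.

```python
import numpy as np

def adam_step(theta, grad, state, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update (Kingma & Ba [19]) with the paper's default beta1/beta2.
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Toy usage: minimize (theta - 3)^2 with the reported defaults and an
# illustrative learning rate alpha = 0.05.
theta, state = np.array([0.0]), (0.0, 0.0, 0)
for _ in range(2000):
    grad = 2.0 * (theta - 3.0)
    theta, state = adam_step(theta, grad, state, alpha=0.05)
```

With a constant learning rate Adam settles into a small oscillation of roughly size α around the minimizer, which is why the loop runs well past the initial approach to θ = 3.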