Post-Inference Prior Swapping

Authors: Willie Neiswanger, Eric Xing

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show empirical results on Bayesian generalized linear models (including linear and logistic regression) with sparsity and heavy-tailed priors, and on latent factor models (including mixture models and topic models) with relational priors over factors (e.g. diversity-encouraging, agglomerate-encouraging, etc.).
Researcher Affiliation | Academia | 1Carnegie Mellon University, Machine Learning Department, Pittsburgh, USA; 2CMU School of Computer Science. Correspondence to: Willie Neiswanger <willie@cs.cmu.edu>.
Pseudocode | Yes | We give pseudocode for the full prior swap importance sampling procedure in Alg. 1.
Open Source Code | No | The paper mentions two dataset URLs but provides no link to source code for the described methodology, nor does it state that code is released or available in supplementary materials.
Open Datasets | Yes | For linear regression, we use the YearPredictionMSD data set* (n = 515345, d = 90), in which regression is used to predict the year associated with a song, and for logistic regression we use the MiniBooNE particle identification data set† (n = 130065, d = 50), in which binary classification is used to distinguish particles. *https://archive.ics.uci.edu/ml/datasets/YearPredictionMSD †https://archive.ics.uci.edu/ml/datasets/MiniBooNE+particle+identification
Dataset Splits | No | The paper reports the total size (n) of each dataset but does not provide training, validation, or test splits, either as percentages or as absolute sample counts.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | No | The paper does not provide specific experimental setup details, such as concrete hyperparameter values, optimizer settings, or other training configurations in the main text.
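The Pseudocode row above refers to the paper's prior swap importance sampling procedure (Alg. 1). As a rough illustration of the general idea, not a reproduction of the paper's algorithm or its experimental settings, the sketch below reweights samples drawn under one prior by the ratio of a target prior to that prior; the Gaussian "false" prior and Laplace target prior here are illustrative assumptions.

```python
import numpy as np

def prior_swap_weights(samples, log_target_prior, log_false_prior):
    """Self-normalized importance weights proportional to pi(theta) / pi_f(theta)."""
    log_w = log_target_prior(samples) - log_false_prior(samples)
    log_w -= log_w.max()          # subtract max for numerical stability
    w = np.exp(log_w)
    return w / w.sum()

# Toy example: samples notionally drawn from a posterior computed under a
# broad Gaussian (false) prior, reweighted toward a sparsity-encouraging
# Laplace (target) prior. Real use would take MCMC posterior samples here.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=5000)

log_gauss = lambda t: -0.5 * t**2            # N(0, 1), up to an additive constant
log_laplace = lambda t: -np.abs(t) / 0.3     # Laplace(0, 0.3), up to an additive constant

w = prior_swap_weights(theta, log_laplace, log_gauss)
swapped_mean = np.sum(w * theta)             # weighted estimate under the swapped prior
```

Working in log space and subtracting the maximum before exponentiating avoids overflow when the two priors differ sharply on some samples.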