Analysis of Variational Bayesian Factorizations for Sparse and Low-Rank Estimation

Authors: David Wipf

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To illustrate these effects, we conducted the following Monte Carlo experiment. First we generate a sparse vector x with ‖x‖_0 = 20 nonzero elements randomly located, with iid N(0, 1) nonzero values. Next we generate a design matrix via Φ = Σ_{i=1}^n i^{-η} u_i v_i^T, where each vector u_i ∈ R^50 and v_i ∈ R^100 is distributed iid with N(0, 1) elements. We then normalized the columns of Φ to have unit ℓ2 norm. The exponent parameter η is chosen from the interval [0, 2], the effect being that larger values of η will introduce larger correlations into the resulting columns of Φ, meaning that Φ^T Φ will have stronger off-diagonal elements. Finally we generate a data vector via y = Φx. We then run various VB sparse estimation algorithms and evaluate them using two metrics, the normalized MSE ⟨‖x − x̂‖_2^2 / ‖x‖_2^2⟩ and the average number of nonzeros ⟨‖x̂‖_0⟩, where the empirical average is taken across 1000 independent trials. This process is repeated for values of η ∈ [0, 2], with results reported in Figure 1.
Researcher Affiliation | Industry | David Wipf DAVIDWIPF@GMAIL.COM, Microsoft Research, Beijing
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information for open-source code.
Open Datasets | No | The paper describes generating synthetic data for its experiments ('First we generate a sparse vector x...', 'Next we generate a design matrix via Φ...', 'Finally we generate a data vector via y = Φx.'), not using a publicly available dataset.
Dataset Splits | No | The paper uses generated synthetic data for Monte Carlo experiments and evaluates metrics across independent trials, but does not specify traditional training/validation/test splits of a fixed dataset.
Hardware Specification | No | The paper does not provide any specific hardware details used for running experiments.
Software Dependencies | No | The paper mentions methods like VB-GSM, VB-BG, and the Lasso estimator, but does not specify any software libraries or frameworks with version numbers used for implementation.
Experiment Setup | Yes | First we generate a sparse vector x with ‖x‖_0 = 20 nonzero elements randomly located, with iid N(0, 1) nonzero values. Next we generate a design matrix via Φ = Σ_{i=1}^n i^{-η} u_i v_i^T, where each vector u_i ∈ R^50 and v_i ∈ R^100 is distributed iid with N(0, 1) elements. We then normalized the columns of Φ to have unit ℓ2 norm. The exponent parameter η is chosen from the interval [0, 2]... Finally we generate a data vector via y = Φx. We then run various VB sparse estimation algorithms and evaluate them using two metrics... This process is repeated for values of η ∈ [0, 2]... (we chose α = 10^{-4}).
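The quoted generation procedure can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions, not the authors' code: the upper limit of the low-rank sum (written as n in the quote), the seeding, and the function names are choices made here for the sketch, and the VB estimators themselves are not implemented.

```python
import numpy as np

def generate_trial(eta, m=50, n=100, k=20, rng=None):
    """One synthetic trial per the quoted setup: dimensions 50x100 and
    k = 20 nonzeros come from the paper; summing the outer products up
    to n terms follows the quote literally (an assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    # Sparse vector x: k randomly located nonzeros, each iid N(0, 1).
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    # Design matrix Phi = sum_i i^{-eta} u_i v_i^T with iid N(0, 1) factors;
    # larger eta downweights later terms, increasing column correlations.
    Phi = np.zeros((m, n))
    for i in range(1, n + 1):
        u = rng.standard_normal(m)
        v = rng.standard_normal(n)
        Phi += (i ** -eta) * np.outer(u, v)
    # Normalize columns of Phi to unit l2 norm, then form y = Phi x.
    Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)
    y = Phi @ x
    return Phi, y, x

def nmse(x, x_hat):
    # Normalized MSE ||x - x_hat||_2^2 / ||x||_2^2 for a single trial;
    # the paper averages this across 1000 independent trials.
    return np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
```

A sweep over η ∈ [0, 2] would then call `generate_trial` repeatedly, run each sparse estimator on (Φ, y), and average `nmse` and the nonzero count of the estimate over trials.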