Bayesian Dictionary Learning with Gaussian Processes and Sigmoid Belief Networks

Authors: Yizhe Zhang, Ricardo Henao, Chunyuan Li, Lawrence Carin

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Applications to image denoising, inpainting and depth-information restoration demonstrate that the proposed model outperforms other leading Bayesian dictionary learning approaches. We present experiments on two sets of images. The results on gray-scale images for denoising and inpainting tasks highlight how characterization of spatial structure improves results."
Researcher Affiliation | Academia | Duke University, Durham, NC ({yz196,rhenao,chunyuan.li,lcarin}@duke.edu)
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide an unambiguous statement or link indicating that its source code is publicly available.
Open Datasets | Yes | "We applied our methods to the 30 images of the Middlebury stereo dataset (Scharstein and Szeliski, 2002; Lu et al., 2014)."
Dataset Splits | No | The paper describes MCMC iterations and sample collection for image reconstruction, but gives no explicit train/validation/test splits (no percentages, sample counts, or citations to predefined partitions).
Hardware Specification | Yes | "All the experiments were conducted on a single machine with two 2.7 GHz processors and 12 GB RAM. For each MCMC iteration, computations were parallelized w.r.t. dictionary elements using a desktop GPU."
Software Dependencies | No | The paper states that its code was written in Matlab and C++, but provides no version numbers for these or any other key software components or libraries.
Experiment Setup | Yes | "The hyper-parameters controlling Gaussian distribution variances, i.e., σb and σλ, were all set to 0.1. As suggested in Zhou et al. (2009), the hyper-parameters for the inverse Gamma distributions (the priors for σw and σε) were set to {10^-6, 10^-6}. Dictionary sizes in both GPFA and GP-SBN-FA are initially set to 128. In GP-SBN-FA, we use a one-layer SBN with the number of top-layer binary units L set to half the size of the dictionary M."
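
The reported setup can be restated as a small configuration sketch. This is a hypothetical restatement for clarity only: the variable names (`sigma_b`, `sigma_lambda`, etc.) are ours, since the authors' code is not publicly available.

```python
# Hypothetical restatement of the hyper-parameters reported in the paper's
# experiment setup; names are illustrative, not taken from the authors' code.

def make_config(dict_size=128):
    """Collect the reported hyper-parameter settings for GPFA / GP-SBN-FA."""
    return {
        "sigma_b": 0.1,                    # Gaussian variance hyper-parameter
        "sigma_lambda": 0.1,               # Gaussian variance hyper-parameter
        "inv_gamma_prior": (1e-6, 1e-6),   # inverse-Gamma hyper-parameters (Zhou et al., 2009)
        "M": dict_size,                    # initial dictionary size (both models)
        "L": dict_size // 2,               # top-layer SBN binary units: half of M
    }

config = make_config()
```

With the default dictionary size of 128, this yields L = 64 top-layer binary units, matching the paper's "half the size of the dictionary" rule.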