From Patches to Images: A Nonparametric Generative Model

Authors: Geng Ji, Michael C. Hughes, Erik B. Sudderth

ICML 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our denoising performance on standard benchmarks is superior to EPLL and comparable to the state-of-the-art, and we provide novel statistical justifications for common image processing heuristics. We also show accurate image inpainting results." |
| Researcher Affiliation | Academia | (1) Brown University, Providence, RI, USA. (2) Harvard University, Cambridge, MA, USA. (3) University of California, Irvine, CA, USA. |
| Pseudocode | No | The paper describes methods and processes but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks or figures. |
| Open Source Code | Yes | "Our open-source Python code is available online at github.com/bnpy/hdp-grid-image-restoration." |
| Open Datasets | Yes | "Following EPLL, we train our HDP-Grid model using 400 clean training and validation images from the Berkeley segmentation dataset (BSDS; Martin et al., 2001)." |
| Dataset Splits | No | The quoted sentence mentions training and validation images but gives no explicit split sizes: "Following EPLL, we train our HDP-Grid model using 400 clean training and validation images from the Berkeley segmentation dataset (BSDS; Martin et al., 2001)." |
| Hardware Specification | No | "To denoise a 512 × 512 pixel image on a modern laptop, our Python code for DP inference with K = 449 clusters takes about 12 min." |
| Software Dependencies | No | "Our open-source Python code is available online at github.com/bnpy/hdp-grid-image-restoration." |
| Experiment Setup | Yes | "We fix δ = 0.5/255 to account for the quantization of image intensities to 8-bit integers. We initialize inference by creating K = 100 image-specific clusters with the k-means++ algorithm (Arthur & Vassilvitskii, 2007) ... and refine with 50 iterations of coordinate descent updates ... We set our annealing schedule for κ to match that used by the public EPLL code." |
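The experiment-setup row above mentions initializing inference with K = 100 image-specific patch clusters found by k-means++. The following is a minimal sketch of that initialization step, not the paper's implementation: the 8×8 patch size, the use of scikit-learn's `KMeans`, and the random stand-in image are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a grayscale training image

# Extract all overlapping 8x8 patches and flatten each to a 64-d vector.
# (The actual patch size used by HDP-Grid is an assumption here.)
P = 8
patches = np.stack([
    image[r:r + P, c:c + P].ravel()
    for r in range(image.shape[0] - P + 1)
    for c in range(image.shape[1] - P + 1)
])

# k-means++ seeding (Arthur & Vassilvitskii, 2007) for K = 100 clusters,
# matching the K quoted in the experiment setup.
km = KMeans(n_clusters=100, init="k-means++", n_init=1, random_state=0)
km.fit(patches)
print(km.cluster_centers_.shape)  # one 64-d centroid per cluster
```

The cluster centers and per-patch labels from this step would then serve as the starting point for the coordinate-descent refinement the quote describes.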