Quiet: Faster Belief Propagation for Images and Related Applications

Authors: Yasuhiro Fujiwara, Dennis Shasha

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show that our approach is significantly faster than existing approaches without sacrificing inference quality."
Researcher Affiliation | Collaboration | NTT Software Innovation Center, 3-9-11 Midori-cho, Musashino-shi, Tokyo 180-8585, Japan; Department of Computer Science, New York University, 251 Mercer Street, New York, NY 10012, USA (fujiwara.yasuhiro@lab.ntt.co.jp, shasha@cs.nyu.edu)
Pseudocode | Yes | "Algorithm 1 Quiet"
Open Source Code | No | The paper provides no statement or link indicating that source code for the described method is openly available.
Open Datasets | Yes | "We used Art, Moebius, Shopvac, Flowers, and Umbrella images obtained from the Middlebury Stereo Datasets¹; their sizes are 1390 × 1110, 1390 × 1110, 2356 × 1996, 2772 × 1980, 2880 × 1980, and 2960 × 2016, respectively. The six images are shown in Figure 1." (¹ http://vision.middlebury.edu/stereo/data/)
Dataset Splits | No | The paper uses standard benchmark images but does not report training/validation/test splits, a cross-validation setup, or a random seed for splitting.
Hardware Specification | Yes | "All experiments were conducted on a Linux 2.70 GHz Intel Xeon server."
Software Dependencies | No | The paper does not specify any software dependencies or their version numbers.
Experiment Setup | Yes | "In the experiments, we set the number of labels, K = 100, the number of iterations in each level, T = 50, the number of levels, B = 4, and the parameter of the Potts model, d = 20, by following the previous paper [Felzenszwalb and Huttenlocher, 2004]."
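For context on the reported experiment setup, the minimal sketch below shows how parameters like these (K = 100 labels, T = 50 iterations per level, B = 4 levels, Potts penalty d = 20) enter a standard min-sum belief-propagation message update with a Potts smoothness term, in the style of Felzenszwalb and Huttenlocher. The NumPy code and the `potts_message` helper are illustrative assumptions, not the paper's Quiet implementation.

```python
import numpy as np

# Parameters reported in the paper's experiment setup
# (following Felzenszwalb and Huttenlocher).
K = 100   # number of labels
T = 50    # BP iterations per level (shown for completeness; not used below)
B = 4     # number of coarse-to-fine levels (shown for completeness; not used below)
d = 20.0  # Potts-model smoothness penalty

def potts_message(h, d):
    """Min-sum message under a Potts pairwise cost V(a, b) = 0 if a == b else d.

    h[k] = sender's data cost plus its other incoming messages for label k.
    The Potts structure gives the K-entry message in O(K) instead of O(K^2):
        m[k] = min(h[k], min_j h[j] + d)
    """
    m = np.minimum(h, h.min() + d)
    return m - m.min()  # normalize so message values stay bounded over iterations

# Illustrative usage with random data costs for a single pixel.
rng = np.random.default_rng(0)
h = rng.uniform(0.0, 50.0, size=K)
print(potts_message(h, d)[:5])
```

Because the Potts pairwise cost is either 0 or d, each message update costs O(K) rather than O(K^2), which is one reason this parameterization is a standard baseline for stereo-style label spaces.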