Dynamic Programming Bipartite Belief Propagation For Hyper Graph Matching

Authors: Zhen Zhang, Julian McAuley, Yong Li, Wei Wei, Yanning Zhang, Qinfeng Shi

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that the proposed methods outperform state-of-the-art techniques for hyper graph matching.
Researcher Affiliation | Academia | (1) School of Computer Science & Engineering, Northwestern Polytechnical University, Xi'an, China; (2) Computer Science and Engineering Department, University of California, San Diego, USA; (3) School of Computer Science, The University of Adelaide, Australia.
Pseudocode | Yes | Algorithm 1: The DPMU Procedure.
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described in this paper, nor does it provide a direct link to a source-code repository.
Open Datasets | Yes | The CMU house dataset has been widely used in previous work to evaluate matching algorithms [Nguyen et al., 2015]. The Cars and Motorbikes dataset consists of 30 pairs of images of cars and 20 pairs of images of motorbikes from the Pascal 2007 dataset [Everingham et al., 2009].
Dataset Splits | No | The paper describes the datasets used (CMU House, Cars and Motorbikes) and how experiments were set up (e.g., matching images at different separations, adding outliers), but it does not specify explicit training, validation, and test splits (e.g., percentages, sample counts, or k-fold cross-validation).
Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | "In our experiments, the proposed algorithms DP-BBP and DP-BBP-MLR are implemented in C++ and Python." This mentions the programming languages but does not provide specific version numbers for compilers, interpreters, or any key libraries or solvers used.
Experiment Setup | Yes | We run at most 100 branch-and-bound iterations, and within each branch-and-bound iteration we run at most 10 iterations of DP-BBP or DP-BBP-MLR. The higher-order potentials are computed as $\theta_c(y_c) = \exp\big(\sum_{i=1}^{3} d(\alpha_c^i, \alpha_{y_c}^i)\big)\,\mathbf{1}(y_c \in \mathcal{C}_Q)$ (Eq. 16). For hyper graph matching methods we use only higher-order potentials, and we use pairwise potentials only for pairwise matching methods, where we set the parameter $\tau_{ij,y_i y_j} = 1/2500$ [Leordeanu et al., 2009]. We set the parameter $\tau_{ij,y_i y_j} = \min(d_{ij}, d_{y_i y_j})$ as in previous work [Zhou and De la Torre, 2012; Zhang et al., 2016].
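
As a reading aid for the higher-order potential in Eq. (16), below is a minimal Python sketch. The indicator term only restricts the potential to candidate assignments, so the sketch assumes $y_c$ is already a candidate; the triangle-angle descriptors $\alpha$ and the similarity $d$ (taken here as a negative absolute difference of angle sines) are common choices in hyper graph matching but are assumptions of this illustration, not details confirmed by the paper. All function and variable names are hypothetical.

```python
import numpy as np

def triangle_angles(points):
    """Interior angles (radians) of the triangle defined by three 2-D points."""
    p = np.asarray(points, dtype=float)
    angles = []
    for i in range(3):
        a, b, c = p[i], p[(i + 1) % 3], p[(i + 2) % 3]
        u, v = b - a, c - a
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(angles)

def higher_order_potential(tri_c, tri_yc, d=None):
    """theta_c(y_c) = exp(sum_i d(alpha_c^i, alpha_{y_c}^i)) for a matched pair of triangles.

    `d` is assumed to be an angle similarity (negative absolute difference of sines);
    the paper's exact choice of d is not reproduced here, so treat this as a placeholder.
    """
    if d is None:
        d = lambda a, b: -abs(np.sin(a) - np.sin(b))  # assumed angle similarity
    alpha_c = triangle_angles(tri_c)     # angles of the triangle c in the first image
    alpha_yc = triangle_angles(tri_yc)   # angles of the candidate triangle y_c in the second image
    return np.exp(sum(d(a, b) for a, b in zip(alpha_c, alpha_yc)))

# Usage: a triangle and a slightly deformed copy should receive a potential close to 1.
tri1 = [(0, 0), (1, 0), (0, 1)]
tri2 = [(0, 0), (1.05, 0), (0, 0.95)]
print(higher_order_potential(tri1, tri2))
```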