Learning deep graph matching with channel-independent embedding and Hungarian attention
Authors: Tianshu Yu, Runzhong Wang, Junchi Yan, Baoxin Li
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The empirical results on three public benchmarks show that the two proposed techniques are orthogonal and beneficial to existing techniques. Specifically, on the one hand, our CIE module can effectively boost accuracy by exploiting edge attributes that are otherwise not considered in state-of-the-art deep graph matching methods; on the other hand, our Hungarian attention mechanism also shows generality and is complementary to existing graph matching losses. Experiments are conducted on three benchmarks widely used for learning-based graph matching: the CUB2011 dataset (Welinder et al., 2010) following the protocol in (Choy et al., 2016), the challenging Pascal VOC keypoint matching dataset (Everingham et al., 2010; Bourdev & Malik, 2009), and the Willow Object Class dataset (Cho et al., 2013). |
| Researcher Affiliation | Academia | Arizona State University; Shanghai Jiao Tong University |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology. |
| Open Datasets | Yes | Experiments are conducted on three benchmarks widely used for learning-based graph matching: the CUB2011 dataset (Welinder et al., 2010) following the protocol in (Choy et al., 2016), the challenging Pascal VOC keypoint matching dataset (Everingham et al., 2010; Bourdev & Malik, 2009), and the Willow Object Class dataset (Cho et al., 2013). |
| Dataset Splits | No | The paper mentions training and testing phases and evaluates accuracy on testing samples, but does not explicitly provide details about a dedicated validation split or its size/methodology. It mentions 'standard splits' for CUB2011, but without specific details. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running its experiments. It only mentions general computing operations without hardware specifications. |
| Software Dependencies | No | The paper mentions using VGG16 and SGD optimizer but does not specify version numbers for these or any other software libraries or frameworks used in the implementation. |
| Experiment Setup | Yes | For training, batch size is set to 8. We employ the SGD optimizer (Bottou, 2010) with momentum 0.9. Two CIE layers are stacked after VGG16. Controlling parameters are set to α = 0.75 and γ = 2 in our setting. We also design a margin loss (Margin) with Hungarian attention under a max-margin rule, where we set the margin value β = 0.2. |
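The Hungarian attention mechanism summarized above can be sketched as follows. This is a hedged, minimal NumPy/SciPy illustration of the idea as described (focus the loss on the union of entries chosen by the Hungarian algorithm and the ground-truth matches), not the authors' implementation; the function name and the use of binary cross-entropy as the attended loss are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_attention_loss(S, S_gt, eps=1e-9):
    """Sketch of Hungarian attention (illustrative, not the paper's code).

    S    : (n, n) soft matching scores in (0, 1)
    S_gt : (n, n) ground-truth permutation matrix (0/1)
    """
    # Discretize the soft matching with the Hungarian algorithm
    # (maximize total score = minimize negated scores).
    rows, cols = linear_sum_assignment(-S)
    Z_hung = np.zeros_like(S_gt)
    Z_hung[rows, cols] = 1.0

    # Attention mask: element-wise OR of the Hungarian output and the
    # ground truth, so both false positives and missed matches are attended.
    Z = np.maximum(Z_hung, S_gt)

    # Binary cross-entropy evaluated only on the attended entries.
    bce = -(S_gt * np.log(S + eps) + (1.0 - S_gt) * np.log(1.0 - S + eps))
    return float((Z * bce).sum() / Z.sum())
```

A near-perfect soft matching (e.g. `S = [[0.9, 0.1], [0.1, 0.9]]` against `S_gt = I`) yields a small loss, while a wrong assignment activates both the mistaken Hungarian entries and the missed ground-truth entries, driving the loss up.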