Image Matching via Loopy RNN
Authors: Donghao Luo, Bingbing Ni, Yichao Yan, Xiaokang Yang
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on several image matching benchmarks demonstrate the great potential of the proposed method. |
| Researcher Affiliation | Academia | Donghao Luo, Bingbing Ni, Yichao Yan, Xiaokang Yang Shanghai Jiao Tong University {luo-donghao, nibingbing, yanyichao, xkyang}@sjtu.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for their methodology is publicly available. |
| Open Datasets | Yes | The proposed Loopy RNN has been evaluated on several image matching benchmarks, including the UBC patch dataset [Winder et al., 2009] and the Mikolajczyk dataset. |
| Dataset Splits | No | The paper describes training on one subset and testing on the other two, but does not explicitly provide percentages or counts for a validation split, nor does it mention cross-validation. It states: 'the model is iteratively trained on one subset and tested on the other two subsets' (see the rotation sketch after the table). |
| Hardware Specification | No | The paper mentions that the models are trained with Caffe and optimized by SGD, but it provides no specific hardware details such as GPU/CPU models or memory. |
| Software Dependencies | No | The paper states 'Our models are trained on Caffe [Jia et al., 2014]' but does not provide specific version numbers for Caffe or any other software dependencies. |
| Experiment Setup | Yes | Network Parameter and Training. The details of Feature Net are listed in Table 1. For Metric Net, there are 3 key factors which influence the performance of the Loopy RNN model: 1) the weighting factor of the monotonous loss λ; 2) the number of RNN nodes N (N ∈ {6, 8, 10, 12}); 3) the output dimension of the LSTM node D (D ∈ {512, 1024, 1536, 2048}). Our models are trained on Caffe [Jia et al., 2014] and optimized by Stochastic Gradient Descent (SGD) with batch size 32. The learning rate is set to 0.01 at the beginning and decreased once every 1000 iterations. Our model converges to the steady state after about 70 epochs. (A hedged sketch of these training settings follows the table.) |
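The Dataset Splits row reports that the model is trained on one benchmark subset and tested on the other two. The sketch below illustrates that rotation under the assumption that the three UBC patch subsets (Liberty, Notre Dame, Yosemite) are meant; the subset names, `train_model`, and `evaluate` are placeholders, not code from the paper.

```python
# Hypothetical sketch of the train-on-one / test-on-the-other-two rotation
# described in the Dataset Splits row. Subset names assume the UBC patch
# benchmark (Winder et al., 2009); train_model and evaluate are placeholders.
SUBSETS = ("liberty", "notredame", "yosemite")

def run_rotation(train_model, evaluate):
    """Train on each subset in turn and test on the remaining two."""
    results = {}
    for train_subset in SUBSETS:
        model = train_model(train_subset)          # placeholder training call
        for test_subset in SUBSETS:
            if test_subset == train_subset:
                continue
            # placeholder evaluation metric on the held-out subset
            results[(train_subset, test_subset)] = evaluate(model, test_subset)
    return results
```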
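The Experiment Setup row gives concrete optimization settings (SGD, batch size 32, initial learning rate 0.01 decreased once every 1000 iterations, about 70 epochs). The PyTorch-style sketch below restates those settings; the paper itself trains with Caffe, so this is only a translation. The model, data loader, and loss are placeholders, and the decay factor `gamma` is not reported in the paper and is assumed here.

```python
import torch
from torch import nn

# PyTorch-style sketch of the reported training settings (the paper uses Caffe).
# model, train_loader and matching_loss are placeholders; gamma (the factor
# applied every 1000 iterations) is an assumption, not a reported value.
def train(model: nn.Module, train_loader, matching_loss, epochs: int = 70):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # initial learning rate 0.01
    scheduler = torch.optim.lr_scheduler.StepLR(
        optimizer, step_size=1000, gamma=0.1)                  # decreased once every 1000 iterations
    for _ in range(epochs):                                    # ~70 epochs to reach the steady state
        for patch_a, patch_b, label in train_loader:           # pairs drawn with batch size 32
            optimizer.zero_grad()
            score = model(patch_a, patch_b)                    # similarity score from the matching model
            loss = matching_loss(score, label)                 # placeholder; the paper's objective includes a monotonous loss weighted by λ
            loss.backward()
            optimizer.step()
            scheduler.step()                                   # scheduler stepped per iteration, not per epoch
```

The hyperparameter factors listed in the row (λ, N ∈ {6, 8, 10, 12}, D ∈ {512, 1024, 1536, 2048}) would be configuration arguments of the placeholder model; the quoted text does not fix their chosen values.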