BCLNet: Bilateral Consensus Learning for Two-View Correspondence Pruning
Authors: Xiangyang Miao, Guobao Xiao, Shiping Wang, Jun Yu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments results demonstrate that our network not only surpasses state-of-the-art methods on benchmark datasets but also showcases robust generalization abilities across various feature extraction techniques. Noteworthily, BCLNet obtains significant improvement gains over the second best method on unknown outdoor dataset, and obviously accelerates model training speed."; "We compare BCLNet with RANSAC (Fischler and Bolles 1981) and recent state-of-the-art learning-based methods for both known and unknown outdoor scenes"; "In this section, we conduct ablation studies on the unknown scene of YFCC100M to verify the role of each key component in our network." |
| Researcher Affiliation | Academia | 1School of Electronics and Information Engineering, Tongji University, 2College of Computer and Data Science, Fuzhou University, 3School of Computer Science and Technology, Hangzhou Dianzi University |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | For the outdoor dataset, we utilized Yahoo's YFCC100M (Thomee et al. 2016), a vast collection containing 100 million pieces of multimedia data. As for the indoor setting, we relied on SUN3D (Xiao, Owens, and Torralba 2013), which is an RGBD video dataset encompassing entire rooms. |
| Dataset Splits | No | The paper states 'Followed the data division approach outlined in (Zhang et al. 2019), we train all models at the same training setting to ensure an equitable comparison', but does not explicitly provide specific training, validation, or test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU, CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper states 'All networks are implemented in PyTorch (Paszke et al. 2019) and trained using the Adam optimizer (Kingma and Ba 2014)', but does not provide specific version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | Throughout both pruning modules, we increase the feature dimension d to 128. For the first pruning module, we use the initial set of correspondences as input and the number of k neighbors is empirically set to 9. For the second pruning module, we set k to 6 and use the weights predicted in the previous module along with the pruned correspondences set as input. Within the BCMA layer and BCR layer of the two pruning modules, we set the number of groups g to 3 and 2, respectively. Additionally, in the Order-Aware block, we set the cluster number to 150 for outdoor scenes and 250 for indoor scenes. All networks are implemented in PyTorch (Paszke et al. 2019) and trained using the Adam optimizer (Kingma and Ba 2014) with an initial learning rate of 10⁻³ and a batch size of 32. The training process consists of 500k iterations. In Equation 9, the weight λ is initialized to 0, and then fixed at 0.5 after the first 20k iterations. |
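The hyperparameters reported in the Experiment Setup row can be collected into a single configuration sketch. Since the paper's code is not public, all identifiers below are hypothetical; the values, however, are taken directly from the quoted setup (including the Equation 9 schedule, where λ is 0 for the first 20k iterations and 0.5 thereafter).

```python
# Hypothetical configuration sketch of the BCLNet training setup described in
# the paper; names are illustrative, values are as reported.
BCLNET_CONFIG = {
    "feature_dim": 128,           # d, used in both pruning modules
    "knn_module1": 9,             # k neighbors, first pruning module
    "knn_module2": 6,             # k neighbors, second pruning module
    "groups_module1": 3,          # g in the BCMA/BCR layers, first module
    "groups_module2": 2,          # g in the BCMA/BCR layers, second module
    "clusters_outdoor": 150,      # Order-Aware block clusters (outdoor scenes)
    "clusters_indoor": 250,       # Order-Aware block clusters (indoor scenes)
    "learning_rate": 1e-3,        # Adam optimizer, initial learning rate
    "batch_size": 32,
    "total_iterations": 500_000,
    "lambda_warmup_iters": 20_000,
}

def loss_weight_lambda(iteration: int, cfg: dict = BCLNET_CONFIG) -> float:
    """Weight lambda from Equation 9: 0 during the first 20k iterations,
    then fixed at 0.5 for the remainder of training."""
    return 0.0 if iteration < cfg["lambda_warmup_iters"] else 0.5
```

A training loop would query `loss_weight_lambda(step)` at each of the 500k iterations to blend the two loss terms of Equation 9.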