Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Matching While Perceiving: Enhance Image Feature Matching with Applicable Semantic Amalgamation

Authors: Shihua Zhang, Zhenjie Zhu, Zizhuo Li, Tao Lu, Jiayi Ma

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that SemaGlue outperforms state-of-the-art methods across various applications such as homography estimation, relative pose estimation, and visual localization. ... Experiments are conducted across extensive tasks, and the reported state-of-the-art results demonstrate the superiority of SemaGlue. We also provide further analysis and ablation studies on the interpretability of our model.
Researcher Affiliation | Academia | 1Electronic Information School, Wuhan University, Wuhan 430072, China; 2School of Computer Science and Engineering, Wuhan Institute of Technology, Wuhan 430205, China. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes methods with mathematical formulations (e.g., Eqs. 1-21) and uses diagrams (Figure 2, Figure 3) to illustrate components like the Cross-Domain Alignment Block and Semantic-Aware Fusion Block, but it does not contain a dedicated pseudocode block or algorithm section.
Open Source Code | Yes | Code: https://github.com/Ze-J-Zhu/Sema-Glue
Open Datasets | Yes | Following the approach outlined in (Lindenberger, Sarlin, and Pollefeys 2023; Zhang and Ma 2024b), we adopt the same datasets for two-stage training: Oxford and Paris (Radenović et al. 2018) for synthetic homography pre-training and MegaDepth (Li and Snavely 2018) for fine-tuning. Specifically, in the first stage, images are resized to 640 × 480 and we extract 512/1024 feature points with SP (DeTone, Malisiewicz, and Rabinovich 2018)/ALIKED (Zhao et al. 2023). The batch size is set to 48 with a learning rate of 0.0001, which is reduced by 20% every epoch after 20 epochs, and training is terminated after 40 epochs. In the second stage, images are resized to 1024 × 1024 with zero padding, and up to 2048 feature points are extracted. Batch size is 16 while the learning rate is 0.0001 for 20 epochs, then decayed by a factor of 10 over 10 epochs until 40 epochs. We retain full resolution for the input image of SegNeXt throughout training. Besides, the channel Cs is set to 480 for Eq. (6) and C is 256 for Eq. (7). All processes are conducted with a single RTX 3090 GPU.
Dataset Splits | Yes | Following the approach outlined in (Lindenberger, Sarlin, and Pollefeys 2023; Zhang and Ma 2024b), we adopt the same datasets for two-stage training: Oxford and Paris (Radenović et al. 2018) for synthetic homography pre-training and MegaDepth (Li and Snavely 2018) for fine-tuning. ... Following the experimental protocols in (Sarlin et al. 2020), we select the MegaDepth-1500 (Li and Snavely 2018) and YFCC100M (Thomee et al. 2016) datasets. ... Following (Chen et al. 2021), we incorporate different matching methods into the official hloc (Sarlin et al. 2019) pipeline and assess them on Aachen Day-Night v1.0 (Sattler et al. 2018) and InLoc (Taira et al. 2018).
Hardware Specification | Yes | All processes are conducted with a single RTX 3090 GPU.
Software Dependencies | No | The paper mentions using a 'pre-trained SegNeXt encoder', 'SuperGlue', the 'hloc pipeline', and 'COLMAP', but it does not specify version numbers for these software components or for any programming languages or libraries.
Experiment Setup | Yes | We set the number of stacked network layers to 9 (i.e., L = 9). ... In the first stage, images are resized to 640 × 480 and we extract 512/1024 feature points with SP (DeTone, Malisiewicz, and Rabinovich 2018)/ALIKED (Zhao et al. 2023). The batch size is set to 48 with a learning rate of 0.0001, which is reduced by 20% every epoch after 20 epochs, and training is terminated after 40 epochs. In the second stage, images are resized to 1024 × 1024 with zero padding, and up to 2048 feature points are extracted. Batch size is 16 while the learning rate is 0.0001 for 20 epochs, then decayed by a factor of 10 over 10 epochs until 40 epochs. ... the channel Cs is set to 480 for Eq. (6) and C is 256 for Eq. (7).
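The two-stage learning-rate schedule quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the stage-1 "reduced by 20% every epoch" is a multiplicative per-epoch decay, and that the stage-2 "decayed by a factor of 10 over 10 epochs" is spread exponentially across those epochs; the function names are hypothetical.

```python
def stage1_lr(epoch, base_lr=1e-4, decay_start=20, factor=0.8):
    """Stage 1 (pre-training): constant base LR for the first 20 epochs,
    then multiplied by 0.8 (a 20% reduction) each subsequent epoch.
    Interpretation of the quoted setup, not confirmed by the paper."""
    if epoch < decay_start:
        return base_lr
    return base_lr * factor ** (epoch - decay_start)

def stage2_lr(epoch, base_lr=1e-4, decay_start=20, decay_epochs=10,
              total_factor=0.1):
    """Stage 2 (fine-tuning): constant base LR for 20 epochs, then an
    overall 10x decay spread exponentially over the next 10 epochs.
    One possible reading of 'decayed by a factor of 10 over 10 epochs'."""
    if epoch < decay_start:
        return base_lr
    steps = min(epoch - decay_start, decay_epochs)
    return base_lr * total_factor ** (steps / decay_epochs)

# Example: LR at a few epochs in each stage.
print(stage1_lr(0))    # constant phase: 1e-4
print(stage1_lr(21))   # one decay step: 8e-5
print(stage2_lr(30))   # fully decayed: 1e-5
```

Under these assumptions, stage 1 ends (epoch 40) at roughly 1e-4 × 0.8^20 ≈ 1.15e-6, and stage 2 holds 1e-5 from epoch 30 through epoch 40.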