Guide Local Feature Matching by Overlap Estimation
Authors: Ying Chen, Dihe Huang, Shang Xu, Jianlin Liu, Yong Liu
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Intensive experiments show that OETR can boost state-of-the-art local feature matching performance substantially, especially for image pairs with small shared regions. The code will be publicly available at https://github.com/AbyssGaze/OETR. |
| Researcher Affiliation | Collaboration | Ying Chen¹*, Dihe Huang¹,²*, Shang Xu¹, Jianlin Liu¹, Yong Liu¹ (¹Tencent Youtu Lab, ²Tsinghua University); {mumuychen, shangxu, jenningsliu, choasliu}@tencent.com, hdh20@mails.tsinghua.edu.cn |
| Pseudocode | No | The paper describes its methods textually and with diagrams (e.g., Fig. 2, Fig. 3, Fig. 4) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code will be publicly available at https://github.com/AbyssGaze/OETR. |
| Open Datasets | Yes | We train our overlap estimation model OETR on the MegaDepth (Li and Snavely 2018) dataset. Image pairs are randomly sampled offline, with overlap ratio in [0.1, 0.7]. According to IMC2021 (Jin et al. 2021) evaluation requirements, we remove scenes overlapping with IMC's validation and test sets from MegaDepth. |
| Dataset Splits | Yes | According to IMC2021 (Jin et al. 2021) evaluation requirements, we remove scenes overlapping with IMC's validation and test sets from MegaDepth. We summarize the results on the IMC2021 validation datasets in Tab. 1. MegaDepth: We split the MegaDepth test set (10 scenes) into subsets according to the overlap scale ratio of Eq. 3 for image pairs. We separate overlap scales into [1, 2), [2, 3), [3, 4), [4, +∞), and combine [2, 3), [3, 4), [4, +∞) into [2, +∞) for image pairs with a noticeable scale difference. (See the binning sketch below the table.) |
| Hardware Specification | Yes | It converges after 48 hours of training on 2 NVIDIA-V100 GPUs with 35 epochs. |
| Software Dependencies | No | The paper mentions optimizers like 'AdamW' but does not specify version numbers for any software dependencies, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | The loss weights λcon, λloc, λiou and λL1 are set to [1, 1, 0.5, 0.5], respectively. The model is trained using AdamW with a weight decay of 10⁻⁴ and a batch size of 8. It converges after 48 hours of training on 2 NVIDIA V100 GPUs with 35 epochs. To enable batched training, input images are resized so their longer side is 1200 while the aspect ratio is kept, then padded to 1216 (divisible by 32) on both sides. (See the preprocessing sketch below the table.) |
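For concreteness, here is a minimal Python sketch of the overlap-scale binning described in the Dataset Splits row. Only the bin edges and the merged [2, +∞) bin come from the quoted text; the `bin_by_overlap_scale` name and the assumption that each pair carries a precomputed `scale_ratio` (per Eq. 3 of the paper, which this report does not reproduce) are ours.

```python
def bin_by_overlap_scale(pairs):
    """Group image pairs into [1,2), [2,3), [3,4), [4,inf) subsets by
    overlap scale ratio, then merge the last three into [2,inf)."""
    edges = [(1, 2), (2, 3), (3, 4), (4, float("inf"))]
    bins = {f"[{lo},{hi})": [] for lo, hi in edges}
    for pair in pairs:
        r = pair["scale_ratio"]  # assumed precomputed per Eq. 3 (not shown here)
        for lo, hi in edges:
            if lo <= r < hi:
                bins[f"[{lo},{hi})"].append(pair)
                break
    # Pairs with a noticeable scale difference are also evaluated jointly.
    bins["[2,inf)"] = bins["[2,3)"] + bins["[3,4)"] + bins["[4,inf)"]
    return bins
```

For example, `bin_by_overlap_scale([{"scale_ratio": 2.4}])` places the pair in both the `[2,3)` and the merged `[2,inf)` subsets.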
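Likewise, a minimal PyTorch sketch of the Experiment Setup row. The numbers (longer side 1200, pad to 1216, loss weights [1, 1, 0.5, 0.5], AdamW with weight decay 10⁻⁴) come from the quoted text; the function names, the bilinear resize mode, and the bottom/right zero-padding placement are our assumptions, as is falling back to PyTorch's default learning rate, which the excerpt does not state.

```python
import torch
import torch.nn.functional as F

def resize_and_pad(image: torch.Tensor, long_side: int = 1200,
                   pad_to: int = 1216) -> torch.Tensor:
    """Resize a (C, H, W) image so its longer side equals `long_side`
    (aspect ratio kept), then zero-pad to a `pad_to` x `pad_to` square,
    which is divisible by 32 as batched training requires."""
    _, h, w = image.shape
    scale = long_side / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    image = F.interpolate(image[None], size=(new_h, new_w),
                          mode="bilinear", align_corners=False)[0]
    # Pad right and bottom: (w_left, w_right, h_top, h_bottom)
    return F.pad(image, (0, pad_to - new_w, 0, pad_to - new_h))

def total_loss(l_con, l_loc, l_iou, l_l1):
    """Weighted sum with [λcon, λloc, λiou, λL1] = [1, 1, 0.5, 0.5];
    the individual loss terms themselves are placeholders here."""
    return 1.0 * l_con + 1.0 * l_loc + 0.5 * l_iou + 0.5 * l_l1

def build_optimizer(model: torch.nn.Module) -> torch.optim.AdamW:
    """AdamW with weight decay 1e-4, per the quoted setup."""
    return torch.optim.AdamW(model.parameters(), weight_decay=1e-4)
```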