TopicFM: Robust and Interpretable Topic-Assisted Feature Matching
Authors: Khang Truong Giang, Soohwan Song, Sungho Jo
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on outdoor and indoor datasets show that our method outperforms other state-of-the-art methods, particularly in challenging cases. |
| Researcher Affiliation | Academia | Khang Truong Giang (1), Soohwan Song (2)*, Sungho Jo (1)* — (1) School of Computing, KAIST, Daejeon 34141, Republic of Korea; (2) Intelligent Robotics Research Division, ETRI, Daejeon 34129, Republic of Korea |
| Pseudocode | No | The paper describes its methodology in text and mathematical formulations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Source code for the proposed method is publicly available. |
| Open Datasets | Yes | We trained the proposed network model on the MegaDepth dataset (Li and Snavely 2018) |
| Dataset Splits | No | The paper mentions training on the MegaDepth dataset and using test sets from MegaDepth and ScanNet (each with 1500 image pairs), but it does not explicitly provide the training and validation split percentages or sample counts for the datasets used in their experiments. |
| Hardware Specification | Yes | Compared with state-of-the-art transformer-based models (Sarlin et al. 2020; Sun et al. 2021) (e.g., LoFTR (2021) requires approximately 19GB of GPU memory), our model is much more efficient. Therefore, we used only four GPUs with 11GB of memory to train the model with a batch size of 4. |
| Software Dependencies | No | The paper states 'We implemented our network model in PyTorch' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We implemented our network model in PyTorch, with an initial learning rate of 0.01. For the network hyperparameters, we set the number of topics K to 100, threshold of coarse match selection τ to 0.2, and number of covisible topics for feature augmentation Kco to 6. |
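For reference, the training hyperparameters reported in the table above can be collected into a single configuration. This is a minimal sketch for reproduction purposes; the dictionary and key names are illustrative assumptions, not identifiers from the released TopicFM code:

```python
# Hypothetical reproduction config assembled from the paper's reported values.
# All names here are illustrative, not taken from the TopicFM repository.
TOPICFM_CONFIG = {
    "initial_learning_rate": 0.01,   # initial learning rate
    "num_topics_K": 100,             # number of topics K
    "coarse_match_threshold_tau": 0.2,  # threshold τ for coarse match selection
    "num_covisible_topics_K_co": 6,  # covisible topics K_co for feature augmentation
    "batch_size": 4,                 # trained on four 11GB GPUs
    "num_gpus": 4,
}
```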