Topological RANSAC for instance verification and retrieval without fine-tuning

Authors: Guoyuan An, Ju-hyeong Seon, Inkyu An, Yuchi Huo, Sung-eui Yoon

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results demonstrate that our method significantly outperforms SP, achieving state-of-the-art performance in non-fine-tuning retrieval. Furthermore, our approach can enhance performance when used in conjunction with fine-tuned features."
Researcher Affiliation | Academia | "Guoyuan An1, Juhyung Seon1, In Kyu An1,4, Yuchi Huo2,3, and Sung-Eui Yoon1; 1School of Computing, KAIST; 2State Key Lab of CAD&CG, Zhejiang University; 3Zhejiang Lab; 4ETRI, Electronics and Telecommunications Research Institute"
Pseudocode | Yes | "Algorithm 1 shows the overall pipeline of our method."
Open Source Code | Yes | "Our code can be found through this link."
Open Datasets | Yes | "Table 1: Results (% mAP) on the ROxf/RPar datasets and their large-scale versions ROxf+1M/RPar+1M, with both Medium and Hard evaluation protocols."
Dataset Splits | No | The paper refers to datasets such as ROxford and RParis for evaluation and mentions 'non-fine-tuning retrieval' scenarios, but it does not explicitly provide training/validation/test splits with specific percentages or sample counts for its experiments. It also discusses fine-tuning on GLD but does not detail its own splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used for its experiments, such as GPU or CPU models, memory, or cloud instance types.
Software Dependencies | No | The paper describes its implementation as a 'Python-based method' and compares its speed to the 'C-implemented SP', but it does not specify any software dependencies with version numbers (e.g., particular libraries, frameworks, or solvers).
Experiment Setup | Yes | The paper provides some specific experimental setup details, such as: 'The threshold α is set as 0.2.' and 'For fairness, all methods rerank the top 100.'
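The two setup details quoted above (rerank the top 100 candidates, verification threshold α = 0.2) fit a standard verification-based reranking pattern. A minimal sketch follows; `verify_fn` is a hypothetical placeholder for a spatial/topological verification score, not the paper's actual implementation.

```python
def rerank_top_k(initial_scores, verify_fn, k=100, alpha=0.2):
    """Rerank the top-k candidates of an initial retrieval ranking.

    `verify_fn(i)` returns a hypothetical verification score for candidate i.
    Candidates at or above the threshold `alpha` are reordered by that score;
    the rest keep their initial order behind them, as does everything past k.
    """
    # Best-first initial ranking by descending retrieval score.
    order = sorted(range(len(initial_scores)),
                   key=lambda i: -initial_scores[i])
    head, tail = order[:k], order[k:]
    scored = [(verify_fn(i), i) for i in head]
    # Verified candidates, sorted by descending verification score.
    passed = [i for s, i in sorted(scored, reverse=True) if s >= alpha]
    # Unverified candidates retain their initial relative order.
    failed = [i for s, i in scored if s < alpha]
    return passed + failed + tail
```

With the reported settings, only the top 100 of the initial ranking are re-examined, so reranking cost is independent of database size.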