Hypergraph Propagation and Community Selection for Objects Retrieval

Authors: Guoyuan An, Yuchi Huo, Sung-eui Yoon

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiment results on ROxford and RParis show that our hypergraph propagation significantly outperforms the existing query expansion and diffusion methods. On the hard protocol of the ROxford dataset, our hypergraph-propagation-based approach achieves the impressive mAPs of 73.0 and 60.5 with and without R1M distractors, respectively.
Researcher Affiliation | Collaboration | Guoyuan An (1), Yuchi Huo (2,3), and Sung-Eui Yoon (1). (1) School of Computing, KAIST; (2) Zhejiang Lab; (3) State Key Lab of CAD&CG, Zhejiang University
Pseudocode | No | The paper describes its algorithms and models in prose and mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code for this work is publicly available at https://sgvr.kaist.ac.kr/~guoyuan/hypergraph_propagation/
Open Datasets | Yes | We evaluate our methods on two well-known landmark retrieval benchmarks: revisited Oxford (ROxf) and revisited Paris (RPar) [25].
Dataset Splits | No | The paper describes the datasets (ROxf, RPar), including their sizes, query sets, and distractor sets, but does not provide explicit train/validation/test splits (e.g., percentages or sample counts per split).
Hardware Specification | Yes | We conduct the offline process on 6 Intel(R) Core(TM) i9-9900K CPUs @ 3.60GHz with 896GB of RAM, and implement the online hypergraph propagation and community selection on one CPU with 50GB of memory.
Software Dependencies | No | The paper mentions using the DELG model [5] but does not specify version numbers for any software, libraries, or frameworks used in the implementation or experiments.
Experiment Setup | Yes | We find that the nearest-neighbor number K does not strongly influence the final result, and we report performance with K set to 200. For community selection, we set S, the number of images used to compute the uncertainty, to 20. When testing the combination of community selection and hypergraph propagation, we set the uncertainty threshold to 1, which we find is a good balance between accuracy and computational overhead.
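The quoted settings above (K = 200 neighbors, S = 20 images for the uncertainty estimate, uncertainty threshold of 1) can be collected into a small configuration sketch. This is a hypothetical illustration only: the names `CONFIG` and `should_propagate`, and the assumption that propagation is triggered when uncertainty exceeds the threshold, are not taken from the authors' code.

```python
# Illustrative sketch of the reported experiment settings for hypergraph
# propagation and community selection. All identifiers are assumptions,
# not the authors' released implementation.

CONFIG = {
    "K": 200,                      # nearest-neighbor count for graph construction
    "S": 20,                       # top-ranked images used to estimate uncertainty
    "uncertainty_threshold": 1.0,  # gate for running hypergraph propagation
}


def should_propagate(uncertainty: float,
                     threshold: float = CONFIG["uncertainty_threshold"]) -> bool:
    """Community-selection gate (assumed direction): run the costlier
    hypergraph propagation only when the query's estimated uncertainty
    exceeds the threshold."""
    return uncertainty > threshold
```

The threshold of 1 reflects the trade-off the paper reports: a lower threshold triggers propagation on more queries (higher accuracy, more computation), while a higher one skips it more often.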