Learning Super-Features for Image Retrieval

Authors: Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on common landmark retrieval benchmarks validate that Super-features substantially outperform state-of-the-art methods when using the same number of features, and only require a significantly smaller memory footprint to match their performance.
Researcher Affiliation | Industry | Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, and Yannis Kalantidis, NAVER LABS Europe, Grenoble, France
Pseudocode | No | The paper describes the proposed method but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code and models are available at: https://github.com/naver/FIRe.
Open Datasets | Yes | We use the SfM-120k dataset (Radenović et al., 2018b) following the 551/162 3D model train/val split from Tolias et al. (2020). For testing, we evaluate instance-level search on the ROxford (Philbin et al., 2007) and the RParis (Philbin et al., 2008) datasets in their revisited version (Radenović et al., 2018a), with and without the 1 million distractor set called R1M.
Dataset Splits | Yes | We use the SfM-120k dataset (Radenović et al., 2018b) following the 551/162 3D model train/val split from Tolias et al. (2020). (See the configuration sketch after the table.)
Hardware Specification | No | The paper only refers to timings on "our server" without specifying its hardware: On our server, it took 157 seconds for HOW and 172 for FIRe, i.e. extraction for Super-features only requires 10% more wall-clock time.
Software Dependencies | No | The paper mentions building the codebase on HOW and using an Adam optimizer and ResNet50, but does not provide specific version numbers for software dependencies like Python, PyTorch/TensorFlow, or CUDA.
Experiment Setup | Yes | We train our model for 200 epochs on SfM-120k using an initial learning rate of 3·10⁻⁵ and random flipping as data augmentation. We multiply the learning rate by a factor of 0.99 at each epoch, and use an Adam optimizer with a weight decay of 10⁻⁴. We use µ = 1.1 in Eq. (8) and weight L_super and L_attn with 0.02 and 0.1, respectively. (See the optimizer sketch below.)
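
A minimal PyTorch sketch of the optimization setup reported in the Experiment Setup row; the model stand-in and the placement of the per-batch loss computation are assumptions for illustration, not the official FIRe implementation.

```python
import torch

# Stand-in for the actual FIRe network (the paper uses a ResNet50 backbone);
# this placeholder only keeps the snippet self-contained and runnable.
model = torch.nn.Linear(8, 8)

# Adam with the reported initial learning rate and weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5, weight_decay=1e-4)

# The learning rate is multiplied by 0.99 after every epoch.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

MU_MARGIN = 1.1   # margin µ reported for Eq. (8)
W_SUPER = 0.02    # weight applied to L_super
W_ATTN = 0.1      # weight applied to L_attn
NUM_EPOCHS = 200  # training length on SfM-120k

for epoch in range(NUM_EPOCHS):
    # Per-batch steps would go here: forward pass (random horizontal flipping
    # is the only reported augmentation), a total loss that weights L_super by
    # W_SUPER and L_attn by W_ATTN, backward pass, and optimizer.step().
    scheduler.step()
```

ExponentialLR with gamma=0.99 matches the reported per-epoch decay; whether the official code uses this scheduler class or an equivalent manual update is not stated in the paper.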
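
To make the Open Datasets and Dataset Splits rows concrete, here is a hypothetical configuration dictionary summarizing the reported data setup; the field names and structure are illustrative and are not taken from the FIRe repository.

```python
# Hypothetical summary of the reported training/evaluation data setup.
DATA_CONFIG = {
    "train": {
        "dataset": "SfM-120k",    # Radenovic et al., 2018b
        "train_3d_models": 551,   # 3D-model split from Tolias et al. (2020)
        "val_3d_models": 162,
    },
    "eval": [
        {"dataset": "ROxford", "revisited": True, "distractors": None},
        {"dataset": "RParis",  "revisited": True, "distractors": None},
        {"dataset": "ROxford", "revisited": True, "distractors": "R1M"},  # +1M distractor set
        {"dataset": "RParis",  "revisited": True, "distractors": "R1M"},
    ],
}
```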