Revisiting Unsupervised Local Descriptor Learning

Authors: Wufan Wang, Lei Zhang, Hua Huang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results show that HybridDesc can efficiently learn local descriptors that surpass existing unsupervised local descriptors and even rival competitive supervised ones.
Researcher Affiliation | Academia | Wufan Wang (1), Lei Zhang (1), Hua Huang (2); (1) School of Computer Science and Technology, Beijing Institute of Technology; (2) School of Artificial Intelligence, Beijing Normal University
Pseudocode | Yes | Algorithm 1: Model optimization with the ODC method
Open Source Code | No | The paper mentions using PyTorch (Paszke et al. 2019), the Kornia library (Riba et al. 2020), and the Faiss library (Johnson, Douze, and Jégou 2019), but does not provide a link or an explicit statement for the authors' own open-source code.
Open Datasets | Yes | The evaluation is conducted on the UBC Phototour (Brown, Hua, and Winder 2011), HPatches (Balntas et al. 2017), Heinly (Heinly, Dunn, and Frahm 2012) and W1BS (Mishkin et al. 2015) datasets.
Dataset Splits | No | The paper describes training and testing splits for datasets (e.g., “the dataset is split into six training-test combinations, in which one subset is used for training while the other two are used for testing” for UBC Phototour; see the split-enumeration sketch below the table), but it does not explicitly provide details about a distinct validation split with percentages or sample counts.
Hardware Specification | Yes | Training is done with PyTorch (Paszke et al. 2019) on an NVIDIA RTX 2080 GPU.
Software Dependencies | No | The paper mentions using PyTorch, the Kornia library, and the Faiss library with citations to their respective papers, but it does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | The stochastic gradient descent (SGD) optimizer is adopted with an initial learning rate of 10, which is linearly decayed to zero. All models are trained for 60 epochs, with the first 10 epochs trained using the rule-based approach and the remaining 50 epochs trained using the clustering-based approach, unless otherwise stated. For the hyperparameter search, the Adam optimizer (Kingma and Ba 2015) is used with a constant learning rate of 0.1. The magnitude ranges of scaling x/y, translation x/y, shear x/y, and rotation are [0.5, 1.5], [-0.5, 0.5], [-0.5, 0.5], and [-180°, 180°]. The initial hyperparameter of each operation is set to 0.01 of its max magnitude. The cluster number C, weighting factor λ, and distance ratio ρ are set to 100K, 0.02, and 0.8, respectively. (A configuration sketch follows the table.)
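
For the Dataset Splits row, a minimal sketch of how the six UBC Phototour training-test combinations arise, assuming the standard subset names (liberty, notredame, yosemite), which this report does not spell out:

```python
from itertools import permutations

# Standard UBC Phototour subset names (an assumption; the report itself
# does not list them).
subsets = ["liberty", "notredame", "yosemite"]

# One subset is used for training and each of the remaining two for testing:
# 3 choices of training set x 2 test sets = 6 training-test combinations.
for train, test in permutations(subsets, 2):
    print(f"train on {train:10s} -> test on {test}")
```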
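
For the Experiment Setup row, a hedged PyTorch sketch of the reported configuration only; the network, augmentation magnitudes, and loop body are placeholders and not the authors' implementation:

```python
import torch

# Stand-in network; the actual descriptor architecture is defined in the paper.
model = torch.nn.Sequential(torch.nn.Conv2d(1, 128, kernel_size=8))

EPOCHS = 60  # first 10 epochs rule-based, remaining 50 clustering-based

# SGD with an initial learning rate of 10, linearly decayed towards zero.
optimizer = torch.optim.SGD(model.parameters(), lr=10.0)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: 1.0 - epoch / EPOCHS)

# Adam with a constant learning rate of 0.1 for the augmentation
# hyperparameter search: scaling x/y, translation x/y, shear x/y, rotation.
# Each magnitude starts at 0.01 of its maximum (simplified here to 0.01).
aug_magnitudes = torch.full((7,), 0.01, requires_grad=True)
hyper_optimizer = torch.optim.Adam([aug_magnitudes], lr=0.1)

# Clustering hyperparameters reported above.
NUM_CLUSTERS = 100_000  # cluster number C
LAMBDA = 0.02           # weighting factor
RHO = 0.8               # distance ratio

for epoch in range(EPOCHS):
    stage = "rule-based" if epoch < 10 else "clustering-based"
    # ... one epoch of descriptor training in the current stage ...
    scheduler.step()
```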