Quantized Kernel Learning for Feature Matching

Authors: Danfeng Qin, Xuanli Chen, Matthieu Guillaumin, Luc Van Gool

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments in Sec. 4 show that our kernels yield state-of-the-art performance on a standard feature matching benchmark and improve over kernels used in the literature for several descriptors, including one based on metric learning."
Researcher Affiliation | Academia | "Danfeng Qin, ETH Zürich; Xuanli Chen, TU Munich; Matthieu Guillaumin, ETH Zürich; Luc Van Gool, ETH Zürich. {qind, guillaumin, vangool}@vision.ee.ethz.ch, xuanli.chen@tum.de"
Pseudocode | No | The paper describes its algorithms and procedures in prose, but does not include formal pseudocode blocks or explicitly labeled algorithms.
Open Source Code | Yes | "For further comparisons, our data and code are available online." See: http://www.vision.ee.ethz.ch/~qind/QuantizedKernel.html
Open Datasets | Yes | "We evaluate our method using the dataset of Brown et al. [5]. It contains three sets of patches extracted from Liberty, Notre Dame and Yosemite..." "M=500k feature pairs are used for training on each dataset"
Dataset Splits | No | The paper mentions training and testing sets but does not explicitly specify a validation set or its split size. It states that "M=500k feature pairs are used for training on each dataset, with as many positives as negatives" and that "100k pairs" are used for the test set.
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU/GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers.
Experiment Setup | No | The paper discusses some aspects of the experimental setup (e.g., the number of intervals and groups) but does not report hyperparameters such as learning rates, batch sizes, or optimizer settings needed to reproduce the training configuration.