Sample Complexity of Learning Mahalanobis Distance Metrics

Authors: Nakul Verma, Kristin Branson

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We experimentally validate how the basic principle of norm-regularization can help enhance the prediction quality even for existing metric learning algorithms on benchmark datasets (Section 5). Our experiments highlight that norm-regularization indeed helps learn weighting metrics that better adapt to the signal in data in high-noise regimes."
Researcher Affiliation | Academia | Nakul Verma, Janelia Research Campus, HHMI (verman@janelia.hhmi.org); Kristin Branson, Janelia Research Campus, HHMI (bransonk@janelia.hhmi.org)
Pseudocode | No | No explicit pseudocode or algorithm blocks are present in the paper; the methodology is described through mathematical formulations and textual explanations.
Open Source Code | No | The paper does not make any explicit statement about releasing source code, nor does it link to a code repository.
Open Datasets | Yes | "We use the UCI benchmark datasets for our experiments: IRIS (4 dim., 150 samples), WINE (13 dim., 178 samples) and IONOSPHERE (34 dim., 351 samples) datasets [11]." [11] K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
Dataset Splits | Yes | "Each noise-augmented dataset was randomly split between 70% training, 10% validation, and 20% test samples."
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using specific metric learning algorithms (LMNN and ITML) but does not list software dependencies or version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | "Each noise-augmented dataset was randomly split between 70% training, 10% validation, and 20% test samples. We used the default settings for each algorithm. For regularized LMNN, we picked the best performing trade-off parameter Λ from {0, 0.1, 0.2, ..., 1} on the validation set. For regularized ITML, we seeded with the rank-one discriminating metric, i.e., we set the prior as the matrix with all zeros, except the diagonal entry corresponding to the most discriminating coordinate set to one. All the reported results were averaged over 20 runs."
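The setup quoted above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the 70/10/20 random split, the grid search over the trade-off parameter Λ ∈ {0, 0.1, ..., 1}, and the rank-one prior used to seed regularized ITML. All function names (and the `validation_score` callable standing in for training and evaluating regularized LMNN) are hypothetical.

```python
import numpy as np

def split_indices(n, train=0.7, val=0.1, seed=0):
    """Randomly split n sample indices into 70% train / 10% val / 20% test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    n_train, n_val = int(train * n), int(val * n)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

def pick_tradeoff(validation_score, lambdas=np.arange(0, 1.01, 0.1)):
    """Pick the best-performing trade-off parameter Lambda on the validation set.

    `validation_score` is a hypothetical callable mapping a Lambda value to
    validation accuracy (e.g., of regularized LMNN trained with that Lambda).
    """
    scores = [validation_score(lam) for lam in lambdas]
    return lambdas[int(np.argmax(scores))]

def rank_one_prior(dim, coord):
    """Rank-one prior seeding regularized ITML: all zeros except the diagonal
    entry for the most discriminating coordinate, which is set to one."""
    prior = np.zeros((dim, dim))
    prior[coord, coord] = 1.0
    return prior
```

For example, on IRIS (150 samples) this split yields 105 training, 15 validation, and 30 test samples; the paper averages such runs 20 times with fresh random splits.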