Regressive Virtual Metric Learning
Authors: Michaël Perrot, Amaury Habrard
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Lastly, we evaluate our approach on several state of the art datasets." and "Section 4 is dedicated to an empirical evaluation of our method on several widely used datasets." |
| Researcher Affiliation | Academia | Michaël Perrot and Amaury Habrard, Université de Lyon, Université Jean Monnet de Saint-Etienne, Laboratoire Hubert Curien, CNRS, UMR5516, F-42000, Saint-Etienne, France. {michael.perrot,amaury.habrard}@univ-st-etienne.fr |
| Pseudocode | Yes | Algorithm 1: Selecting S̄ from a set of examples S. |
| Open Source Code | Yes | The closed-form implementation of RVML is freely available on the authors' website. |
| Open Datasets | Yes | In this section, we evaluate our approach on 13 different datasets coming from either the UCI [19] repository or used in recent works in metric learning [8, 20, 21]. |
| Dataset Splits | Yes | "For isolet, splice and svmguide1 we have access to a standard training/test partition, for the other datasets we use a 70% training/30% test partition, we perform the experiments on 10 different splits and we average the result." and "We set our regularization parameter λ with a 5-fold cross validation." (A sketch of this protocol appears after the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions algorithms and methods (e.g., 'Sinkhorn-Knopp algorithm', '1-nearest neighbor classifier', 'SCML', 'LMNN') but does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | "We normalize the examples with respect to the training set by subtracting for each attribute its mean and dividing by 3 times its standard deviation. We set our regularization parameter λ with a 5-fold cross validation. After the metric learning step, we use a 1-nearest neighbor classifier to assess the performance of the metric and report the accuracy obtained." and "with the parameter σ fixed as the mean of all pairwise training set Euclidean distances" (see the preprocessing sketch below) |
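
To make the evaluation protocol quoted in the Dataset Splits row concrete, here is a minimal sketch of the split-and-validation loop: 10 random 70%/30% train/test partitions, with the regularization parameter λ chosen on each training set by 5-fold cross-validation. The `fit(X, y, lam)` and `score(...)` arguments are placeholders for a metric learner and an evaluation function; this is an illustration of the protocol, not the authors' released implementation.

```python
# Hedged sketch of the evaluation protocol described in the paper:
# 10 random 70/30 splits, lambda selected by 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import train_test_split, KFold

def select_lambda(X, y, lambdas, fit, score, n_folds=5):
    """Pick lambda by 5-fold cross-validation on the training set."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    mean_scores = []
    for lam in lambdas:
        fold_scores = []
        for tr, va in kf.split(X):
            L = fit(X[tr], y[tr], lam)          # placeholder learner
            fold_scores.append(score(L, X[tr], y[tr], X[va], y[va]))
        mean_scores.append(np.mean(fold_scores))
    return lambdas[int(np.argmax(mean_scores))]

def run_protocol(X, y, lambdas, fit, score, n_splits=10):
    """10 random 70%/30% splits; report mean and std of test accuracy."""
    accs = []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=seed)
        lam = select_lambda(X_tr, y_tr, lambdas, fit, score)
        L = fit(X_tr, y_tr, lam)
        accs.append(score(L, X_tr, y_tr, X_te, y_te))
    return np.mean(accs), np.std(accs)
```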
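
The preprocessing and evaluation steps in the Experiment Setup row can be sketched in the same spirit: per-attribute normalization by the training mean and 3× the training standard deviation, a kernel width σ fixed as the mean of all pairwise training-set Euclidean distances, and 1-nearest-neighbor accuracy in the learned space. The helper names below (`normalize`, `mean_pairwise_sigma`, `knn_accuracy`) are illustrative and do not come from the paper.

```python
# Hedged sketch of the preprocessing and 1-NN evaluation from the paper.
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.neighbors import KNeighborsClassifier

def normalize(X_train, X_test):
    """Subtract the training mean and divide by 3x the training std."""
    mu = X_train.mean(axis=0)
    sd = X_train.std(axis=0)
    sd[sd == 0] = 1.0  # guard against constant attributes (assumption)
    return (X_train - mu) / (3 * sd), (X_test - mu) / (3 * sd)

def mean_pairwise_sigma(X_train):
    """Sigma as the mean of all pairwise training Euclidean distances."""
    D = pairwise_distances(X_train)  # Euclidean by default
    return D[np.triu_indices_from(D, k=1)].mean()

def knn_accuracy(L, X_train, y_train, X_test, y_test):
    """Project with the learned linear map L, then score with 1-NN."""
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(X_train @ L.T, y_train)
    return clf.score(X_test @ L.T, y_test)
```

Under these assumptions, `knn_accuracy` can serve directly as the `score` argument of `run_protocol` in the protocol sketch above.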