Large-Margin Metric Learning for Constrained Partitioning Problems
Authors: Rémi Lajugie, Francis Bach, Sylvain Arlot
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show how learning the metric can significantly improve performance on bioinformatics, video or image segmentation problems. |
| Researcher Affiliation | Academia | Rémi Lajugie REMI.LAJUGIE@ENS.FR Sylvain Arlot SYLVAIN.ARLOT@ENS.FR Francis Bach FRANCIS.BACH@INRIA.FR Département d'Informatique de l'École Normale Supérieure (CNRS/INRIA/ENS), Paris, France |
| Pseudocode | Yes | Algorithm 1: Dynamic programming for maximizing Tr(AM) such that M ∈ M_seq (an illustrative sketch of this kind of dynamic program follows the table) |
| Open Source Code | No | The paper does not provide an explicit statement or link to the source code for the methodology described. |
| Open Datasets | Yes | On the data from the Neuroblastoma dataset (Hocking et al., 2013), some caryotypes with changes of distribution were manually annotated. We consider the task of segmenting images of the Weizmann horses dataset (Borenstein & Ullman, 2004), using N = 20 training images with colour and dense SIFT features. In Table 2, we present analogous results for the Oxford flowers (Nilsback & Zisserman, 2006) dataset, for which the training set size is bigger: 150 images. |
| Dataset Splits | Yes | Using 4 shows for train, 3 for validation, 3 for test, we report below the test errors for each test show with the loss ℓ (smaller is better). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions general software or techniques but does not specify software dependencies with version numbers. |
| Experiment Setup | No | The paper provides details on features used (GIST, MFCC, etc.) and states that the structured SVM parameter was adjusted using a validation set, but it does not specify concrete hyperparameter values or comprehensive training configurations like learning rates, batch sizes, or optimizer details. |
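
The pseudocode row above refers to the paper's dynamic program over sequential partition matrices. The sketch below is only a rough illustration of that kind of algorithm, not the authors' implementation: it assumes `A` is a T×T similarity matrix and maximizes Tr(AM) over partitions of the T points into K contiguous segments, with M block-constant and equal to 1/(segment length) on each diagonal block. All function and variable names here are hypothetical.

```python
import numpy as np

def segment_gain(P, s, t):
    """Gain of segment [s, t): (1/(t-s)) * sum of A[i, j] for i, j in [s, t).
    P is a 2-D prefix-sum table, so each query is O(1)."""
    total = P[t, t] - P[s, t] - P[t, s] + P[s, s]
    return total / (t - s)

def dp_segmentation(A, K):
    """Maximize Tr(A M) over segmentations of T points into K contiguous segments,
    where M has value 1/(segment length) on each diagonal block and 0 elsewhere.
    Returns (best objective value, list of segment end indices)."""
    T = A.shape[0]
    # 2-D prefix sums: P[i, j] = sum of A[:i, :j]
    P = np.zeros((T + 1, T + 1))
    P[1:, 1:] = A.cumsum(axis=0).cumsum(axis=1)

    best = np.full((K + 1, T + 1), -np.inf)
    argbest = np.zeros((K + 1, T + 1), dtype=int)
    best[0, 0] = 0.0
    for k in range(1, K + 1):
        for t in range(k, T + 1):
            for s in range(k - 1, t):
                if best[k - 1, s] == -np.inf:
                    continue
                val = best[k - 1, s] + segment_gain(P, s, t)
                if val > best[k, t]:
                    best[k, t] = val
                    argbest[k, t] = s
    # Backtrack the segment boundaries.
    bounds, t = [], T
    for k in range(K, 0, -1):
        bounds.append(t)
        t = argbest[k, t]
    return best[K, T], bounds[::-1]

# Example usage on a toy signal with an obvious change point.
if __name__ == "__main__":
    x = np.concatenate([np.zeros(10), np.ones(10)])[:, None]
    A = x @ x.T  # similarity matrix under a trivial metric
    value, ends = dp_segmentation(A, K=2)
    print(value, ends)  # expect a boundary near index 10
```

This runs in O(K T²) time, which is the usual cost of exact change-point dynamic programs of this form; the prefix-sum table keeps each segment-gain evaluation constant time.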