Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Large-Margin Metric Learning for Constrained Partitioning Problems
Authors: Rémi Lajugie, Francis Bach, Sylvain Arlot
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show how learning the metric can significantly improve performance on bioinformatics, video or image segmentation problems. |
| Researcher Affiliation | Academia | Rémi Lajugie EMAIL Sylvain Arlot EMAIL Francis Bach EMAIL Département d'Informatique de l'École Normale Supérieure (CNRS/INRIA/ENS), Paris, France |
| Pseudocode | Yes | Algorithm 1: Dynamic programming for maximizing Tr(AM) such that M ∈ M_seq |
| Open Source Code | No | The paper does not provide an explicit statement or link to the source code for the methodology described. |
| Open Datasets | Yes | On the data from the Neuroblastoma dataset (Hocking et al., 2013), some caryotypes with changes of distribution were manually annotated. We consider the task of segmenting images of the Weizmann horses dataset (Borenstein & Ullman, 2004), using N = 20 training images with colour and dense SIFT features. In Table 2, we present analogous results for the Oxford flowers (Nilsback & Zisserman, 2006) dataset, for which the training set size is bigger: 150 images. |
| Dataset Splits | Yes | Using 4 shows for train, 3 for validation, 3 for test, we report below the test errors for each test show with the loss ℓ (smaller is better). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions general software or techniques but does not specify software dependencies with version numbers. |
| Experiment Setup | No | The paper provides details on features used (GIST, MFCC, etc.) and states that the structured SVM parameter was adjusted using a validation set, but it does not specify concrete hyperparameter values or comprehensive training configurations like learning rates, batch sizes, or optimizer details. |
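The pseudocode row above refers to a dynamic program for maximizing Tr(AM) over sequence-segmentation matrices M ∈ M_seq. The paper's own Algorithm 1 is not reproduced here, but the underlying technique (optimal partitioning of a sequence into K contiguous segments by dynamic programming) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `best_segmentation` and the normalization `block.sum() / (j - i)` (corresponding to a block-constant M with entries 1/len on each segment) are assumptions for the sketch.

```python
# Hypothetical sketch (not the authors' implementation): dynamic programming
# for partitioning a sequence of length T into exactly K contiguous segments,
# maximizing the sum of normalized within-segment affinities -- the structure
# behind maximizing Tr(AM) over segmentation matrices M in M_seq.
import numpy as np

def best_segmentation(A, K):
    """A: (T, T) affinity matrix; returns (best score, list of K segments)."""
    T = A.shape[0]
    # seg_score[i, j] = contribution of segment [i, j) to Tr(AM) when M is
    # block-constant with value 1/(j - i) on that segment (an assumption).
    seg_score = np.full((T, T + 1), -np.inf)
    for i in range(T):
        for j in range(i + 1, T + 1):
            block = A[i:j, i:j]
            seg_score[i, j] = block.sum() / (j - i)

    # dp[k, j]: best score covering the first j points with k segments.
    dp = np.full((K + 1, T + 1), -np.inf)
    back = np.zeros((K + 1, T + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, T + 1):
            for i in range(k - 1, j):
                cand = dp[k - 1, i] + seg_score[i, j]
                if cand > dp[k, j]:
                    dp[k, j] = cand
                    back[k, j] = i

    # Backtrack to recover the segment boundaries.
    segs, j = [], T
    for k in range(K, 0, -1):
        i = back[k, j]
        segs.append((i, j))
        j = i
    return dp[K, T], segs[::-1]
```

On a block-diagonal affinity matrix with two clear blocks, the recursion recovers the block boundaries; the cubic-time loop over (k, j, i) is the standard cost of this exact dynamic program.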