Learning Submodular Losses with the Lovász Hinge
Authors: Jiaqian Yu, Matthew Blaschko
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the utility of this novel convex surrogate through a real world image labeling task. ... In this section, we consider a task in which multiple labels need to be predicted simultaneously and for which a submodular loss over labels is to be minimized. ... Microsoft COCO The Microsoft COCO dataset (Lin et al., 2014) is an image recognition, segmentation, and captioning dataset. ... Results We repeated each experiment 10 times with random sampling in order to obtain an estimate of the average performance. Table 1 and Table 2 show the cross comparison of average loss values (with standard error) using different loss functions during training and during testing for the COCO dataset. |
| Researcher Affiliation | Academia | Jiaqian Yu JIAQIAN.YU@CENTRALESUPELEC.FR Matthew B. Blaschko MATTHEW.BLASCHKO@INRIA.FR CentraleSupélec & Inria, Grande Voie des Vignes, 92295 Châtenay-Malabry, France |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code is available for download at https://sites.google.com/site/jiaqianyu08/lovaszhinge. |
| Open Datasets | Yes | Microsoft COCO The Microsoft COCO dataset (Lin et al., 2014) is an image recognition, segmentation, and captioning dataset. |
| Dataset Splits | No | The paper mentions a 'training/validation set' and a 'testing set' but does not give the percentages or counts that would separate a validation split out of the combined training/validation data. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or processing units used for running the experiments. |
| Software Dependencies | No | The paper mentions software like 'Overfeat' and 'Python', and algorithms like 'SVM' and 'cutting-plane optimization', but does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | No | The paper describes the general approach and loss functions used, and mentions the type of optimization ('one-slack cutting-plane optimization with ℓ2 regularization'), but does not provide specific hyperparameter values (e.g., regularization strength, number of cutting-plane iterations) or detailed system-level training configurations. Illustrative sketches of the surrogate and of a simple training stand-in follow this table. |
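Since the Pseudocode row notes that the paper contains no structured algorithm blocks, here is a minimal reading-aid sketch of the surrogate the paper is built around. It is not the authors' released implementation (that lives at the URL in the Open Source Code row); it assumes the variant of the Lovász hinge in which negative margin violations are clipped to zero before the Lovász extension of the set loss is applied, and the names `lovasz_hinge` and `loss_fn` are illustrative.

```python
import numpy as np

def lovasz_hinge(scores, labels, loss_fn):
    """Lovász extension of a set loss, evaluated at clipped margin violations.

    scores  : real-valued predictions, shape (p,)
    labels  : ground-truth labels in {-1, +1}, shape (p,)
    loss_fn : set function l(S) over frozensets of label indices, assumed
              submodular and increasing with l(frozenset()) == 0
    """
    margins = np.maximum(0.0, 1.0 - scores * labels)  # clipped margin violations
    order = np.argsort(-margins)                      # decreasing margin order
    surrogate, prev, chosen = 0.0, 0.0, []
    for i in order:
        chosen.append(int(i))
        cur = loss_fn(frozenset(chosen))
        surrogate += margins[i] * (cur - prev)        # weight by discrete gradient of l
        prev = cur
    return surrogate

# Sanity check: for the modular Hamming loss l(S) = |S| every discrete
# gradient equals 1, so the surrogate reduces to a sum of per-label hinges.
rng = np.random.default_rng(0)
s = rng.normal(size=5)
y = rng.choice([-1.0, 1.0], size=5)
assert np.isclose(lovasz_hinge(s, y, lambda S: float(len(S))),
                  np.maximum(0.0, 1.0 - s * y).sum())
```

For a strictly submodular loss the surrogate couples the labels: errors made on top of other errors receive smaller discrete-gradient weights, which is the diminishing-returns behavior the paper sets out to train against.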
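The Experiment Setup row notes that training uses one-slack cutting-plane optimization with ℓ2 regularization; reproducing that solver is beyond a sketch, so the snippet below is a plainly labeled stand-in that pairs a subgradient of the clipped Lovász hinge (the same quantity a cutting-plane oracle would need) with ordinary subgradient descent on a linear model. All names here (`lovasz_hinge_subgrad`, `W`, the toy `loss_fn`) are hypothetical, not taken from the paper.

```python
import numpy as np

def lovasz_hinge_subgrad(scores, labels, loss_fn):
    """A subgradient of the clipped Lovász hinge w.r.t. the scores."""
    margins = np.maximum(0.0, 1.0 - scores * labels)
    order = np.argsort(-margins)
    grad = np.zeros_like(scores)
    prev, chosen = 0.0, []
    for i in order:
        chosen.append(int(i))
        cur = loss_fn(frozenset(chosen))
        if margins[i] > 0.0:               # clipped coordinates contribute nothing
            grad[i] = -labels[i] * (cur - prev)
        prev = cur
    return grad

# Toy training loop: linear per-label scores W @ x, plain subgradient
# descent with l2 regularization standing in for the paper's
# one-slack cutting-plane solver.
rng = np.random.default_rng(1)
p, d, n, lam, lr = 4, 8, 32, 1e-2, 0.1
X = rng.normal(size=(n, d))
Y = rng.choice([-1.0, 1.0], size=(n, p))
W = np.zeros((p, d))
loss_fn = lambda S: np.sqrt(len(S))        # a submodular, increasing toy loss
for _ in range(100):
    i = rng.integers(n)
    g = lovasz_hinge_subgrad(W @ X[i], Y[i], loss_fn)
    W -= lr * (np.outer(g, X[i]) + lam * W)
```

The toy loss sqrt(|S|) is increasing, vanishes on the empty set, and is a concave function of the set size, hence submodular, so it is a legitimate instance of the family of losses the paper targets.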