Structured Prediction with Partial Labelling through the Infimum Loss

Authors: Vivien Cabannes, Alessandro Rudi, Francis Bach

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments confirm the superiority of the proposed approach over commonly used baselines."; "Finally, we test our method against some simple baselines, on synthetic and real examples."; "In this section, we will apply Eq. (7) to some synthetic and real datasets from different prediction problems and compared with the average estimator presented in the section above, used as a baseline."
Researcher Affiliation | Academia | Vivien Cabannes¹, Alessandro Rudi¹, Francis Bach¹. ¹INRIA, Département d'Informatique de l'École Normale Supérieure, PSL Research University, Paris, France.
Pseudocode | No | The paper provides mathematical formulations of its algorithm (e.g., Eq. 7), but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block with structured procedural steps.
Open Source Code | Yes | "Code is available online." The footnoted repository is https://github.com/VivienCabannes/partial_labelling.
Open Datasets | Yes | "To compare IL and AC, we used LIBSVM datasets (Chang & Lin, 2011) on which we corrupted labels to simulate partial labelling."; "Yet, when labels are unbalanced, such as in the dna and svmguide2 datasets". (A hedged sketch of such label corruption appears after the table.)
Dataset Splits | No | The paper mentions 'eight-fold cross-validation' as its evaluation protocol, but it does not specify explicit training, validation, and test splits with percentages, sample counts, or references to predefined validation sets. (A cross-validation sketch appears after the table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions 'Code is available online' but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) required for replication.
Experiment Setup | No | The paper mentions some experimental settings, such as using a Gaussian kernel for the classification experiments and describing corruption parameters for the datasets, but it lacks specific hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer details) necessary for full replication of the training process.
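
As a reading aid for the Open Datasets row, below is a minimal sketch of how exact labels could be corrupted into candidate label sets to simulate partial labelling. The function name corrupt_to_partial_labels and the corruption scheme (keep the true label, add each other class independently with probability p) are illustrative assumptions; the paper's actual corruption procedure is implemented in the authors' repository and may differ.

```python
import numpy as np

def corrupt_to_partial_labels(y, n_classes, p=0.2, rng=None):
    """Turn exact labels y into candidate label sets (partial labels).

    Each example keeps its true label and, independently with probability p,
    also receives each of the other classes as a spurious candidate.
    Returns a boolean matrix S of shape (n_samples, n_classes) where
    S[i, k] is True iff class k belongs to the candidate set of example i.
    """
    rng = np.random.default_rng(rng)
    n = len(y)
    S = rng.random((n, n_classes)) < p   # spurious candidate labels
    S[np.arange(n), y] = True            # the true label is always kept
    return S

# Toy usage: corrupt a 3-class problem with corruption probability 0.3
y = np.array([0, 2, 1, 1, 0])
S = corrupt_to_partial_labels(y, n_classes=3, p=0.3, rng=0)
print(S.astype(int))
```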
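
For the Dataset Splits and Experiment Setup rows, here is a hedged sketch of an eight-fold cross-validation loop with a Gaussian (RBF) kernel classifier on a LIBSVM-format dataset. The SVC classifier is only a fully supervised stand-in, and the "dna.scale" path is a placeholder; the paper's infimum-loss estimator and exact evaluation pipeline are not reproduced here.

```python
from sklearn.datasets import load_svmlight_file   # reads LIBSVM-format files
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

# Placeholder path; LIBSVM datasets such as "dna" can be downloaded
# from the LIBSVM dataset page in svmlight/libsvm format.
X, y = load_svmlight_file("dna.scale")
X = X.toarray()

# Gaussian (RBF) kernel classifier used as a simple supervised reference;
# the paper's infimum-loss method lives in the authors' repository, not here.
clf = SVC(kernel="rbf", gamma="scale", C=1.0)

# Eight-fold cross-validation, matching the protocol named in the paper.
cv = KFold(n_splits=8, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"8-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```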