Disambiguation of Weak Supervision leading to Exponential Convergence rates

Authors: Vivien A. Cabannes, Francis Bach, Alessandro Rudi

Venue: ICML 2021

Reproducibility assessment; each entry below gives the variable, the assessed result, and the LLM response supporting that assessment.
Research Type: Experimental. "We prove exponential convergence rates of our algorithm under classical learnability assumptions, and we illustrate the usefulness of our method on practical examples. We end this paper with a review of literature in Section 6, before showcasing the usefulness of our method on practical examples in Section 7, and opening on perspectives in Section 8."
Researcher Affiliation: Academia. "Institut National de Recherche en Informatique et en Automatique, Département d'Informatique de l'École Normale Supérieure, PSL Research University."
Pseudocode: No. The paper describes the algorithms mathematically (Eq. 4 and Eq. 5) but does not present them in a structured pseudocode or algorithm block.
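
Since the paper gives Eqs. (4) and (5) only in mathematical form, the following is a minimal Python sketch of one plausible alternating-minimization reading of the disambiguation step: each pseudo-label is repeatedly re-estimated inside its weak-supervision set against a weighted empirical risk. The weight matrix `alpha`, the `loss` callable, and the finite label set are illustrative assumptions, not the authors' exact formulation.

```python
def disambiguate(alpha, candidate_sets, labels, loss, n_iters=100):
    """Hedged sketch of alternating disambiguation (not the paper's exact Eqs. 4-5).

    alpha          -- (n, n) weight matrix, e.g. from kernel ridge regression
    candidate_sets -- list of n candidate-label lists (the weak-supervision sets S_i)
    labels         -- the full finite label set
    loss           -- callable loss(z, y) -> float
    """
    # Start from an arbitrary pseudo-label inside each candidate set.
    y_hat = [s[0] for s in candidate_sets]
    for _ in range(n_iters):
        changed = False
        for i in range(len(candidate_sets)):
            # Weighted empirical risk of predicting label z at point x_i.
            def risk(z):
                return sum(alpha[i, j] * loss(z, y) for j, y in enumerate(y_hat))
            z_i = min(labels, key=risk)                               # unconstrained minimizer
            y_i = min(candidate_sets[i], key=lambda y: loss(z_i, y))  # project back onto S_i
            if y_i != y_hat[i]:
                y_hat[i], changed = y_i, True
        if not changed:
            break  # fixed point: no pseudo-label changed in a full pass
    return y_hat
```

Once pseudo-labels are fixed, prediction at a new point x would follow the same weighted-risk pattern, f(x) = argmin_z sum_i alpha_i(x) loss(z, y_hat_i), in the spirit of Eq. (5).
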
Open Source Code: Yes. "All the code is available online": https://github.com/VivienCabannes/partial_labelling.
Open Datasets: Yes. "Finally, we compare our algorithm, our baseline (13) and the baseline considered by Cabannes et al. (2020) on real datasets from the LIBSVM dataset (Chang & Lin, 2011)."
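
For reference, LIBSVM distributes datasets in the svmlight/libsvm text format, which scikit-learn reads directly. A minimal loading sketch follows; the file path is a placeholder, since the report does not name which LIBSVM datasets were used.

```python
from sklearn.datasets import load_svmlight_file

# "dataset.txt" is a placeholder path for any file in LIBSVM's svmlight format.
X, y = load_svmlight_file("dataset.txt")
X = X.toarray()  # load_svmlight_file returns a sparse feature matrix
```
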
Dataset Splits: No. "We split fully supervised LIBSVM datasets into training and testing dataset." The paper mentions splitting into training and testing sets but does not report the split percentages, the sample counts, or the methodology used to create the splits (e.g., random seed or cross-validation folds).
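
To make concrete what a reproducible split specification would look like, the sketch below pins both the test fraction and the random seed. The 80/20 ratio and the seed are illustrative assumptions; the paper reports neither.

```python
from sklearn.model_selection import train_test_split

# Illustrative values only: the paper reports neither a split ratio nor a seed.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
```
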
Hardware Specification: No. The paper does not report any hardware details, such as GPU or CPU models or memory specifications, for the machines used to run the experiments.
Software Dependencies: Yes. "IBM. IBM ILOG CPLEX 12.7 User's Manual. IBM ILOG CPLEX Division, 2017."
Experiment Setup: No. "For each method... we consider weights α given by kernel ridge regression with Gaussian kernel, for which we optimized hyperparameters with cross-validation on the training set." The paper states that hyperparameters were tuned by cross-validation but does not report the selected values or other training configuration details.
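
As a concrete reading of that setup, kernel ridge regression with a Gaussian kernel yields weights in the standard closed form alpha(x) = (K + n*lambda*I)^{-1} k(x). The sketch below assumes this form; the bandwidth `sigma` and regularization `lam` stand in for the cross-validated hyperparameters the paper leaves unreported.

```python
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel(A, B, sigma):
    # k(a, b) = exp(-||a - b||^2 / (2 * sigma^2))
    return np.exp(-cdist(A, B, "sqeuclidean") / (2 * sigma ** 2))

def krr_weights(X_train, X_query, sigma=1.0, lam=1e-3):
    """Kernel ridge regression weights alpha(x) = (K + n*lam*I)^{-1} k(x).

    sigma and lam are placeholders; the paper tunes hyperparameters by
    cross-validation on the training set without reporting the chosen values.
    """
    n = X_train.shape[0]
    K = gaussian_kernel(X_train, X_train, sigma)
    k_query = gaussian_kernel(X_train, X_query, sigma)  # shape (n, n_query)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), k_query)
    return alpha.T  # one row of n training-point weights per query point
```

A grid search over (sigma, lam) with scikit-learn's cross-validation utilities would mirror the stated tuning protocol, though the grid itself would have to be guessed.
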