Exploiting Class Learnability in Noisy Data

Authors: Matthew Klawonn, Eric Heim, James Hendler (pp. 4082-4089)

AAAI 2019

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Testing our approach on a variety of data sets, we show our algorithm learns to focus on classes for which the model has low generalization error relative to strong baselines, yielding a classifier with good performance on learnable classes." |
| Researcher Affiliation | Collaboration | Matthew Klawonn, Rensselaer Polytechnic Institute, Dept. of Computer Science, Troy, NY 12180 (klawom@rpi.edu); Eric Heim, Air Force Research Laboratory, Information Directorate, Rome, NY 13441 (eric.heim.1@us.af.mil); James Hendler, Rensselaer Polytechnic Institute, Dept. of Computer Science, Troy, NY 12180 (hendler@cs.rpi.edu) |
| Pseudocode | Yes | Algorithm 1, "Bandit Supervised Learning" (see the sketch after this table). |
| Open Source Code | No | The paper does not state that the source code for its methodology is publicly available, nor does it provide a link to a repository. |
| Open Datasets | Yes | "We start with the Cifar100 data set (Krizhevsky and Hinton 2009), a collection of 50k training images (10k of which we set aside for validation) and 10k test images, each of which are labeled as belonging to one of a hundred classes." |
| Dataset Splits | Yes | Same passage as above: 40k images for training, 10k held out for validation, and 10k for testing (see the split sketch after this table). |
| Hardware Specification | No | No hardware specifications (GPU/CPU models or cloud setup) used for running the experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions software such as Python and various frameworks but does not give version numbers, which are needed for reproducibility. |
| Experiment Setup | No | The paper uses default values from a cited work for some parameters and describes the training process in general terms (e.g., selecting classes iteratively), but it does not state concrete hyperparameter values (learning rate, batch size, number of epochs) or other system-level training settings. |
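
The Pseudocode row refers to the paper's Algorithm 1, which casts class selection as a multi-armed bandit: each class is an arm, and pulling an arm means spending training effort on that class and observing a reward tied to how well the class is learned. Below is a minimal, hypothetical sketch of that idea using a standard UCB1 bandit; the simulated Bernoulli rewards, per-class learnability values, and horizon are our assumptions for illustration and are not the authors' implementation, where the reward would come from held-out performance after training on the chosen class.

```python
import numpy as np

# Sketch of a UCB1 bandit over classes (illustrative, not the paper's Algorithm 1).
# Each arm is a class; a pull stands in for a training step on that class, and
# the reward stands in for held-out performance on it. Rewards are simulated
# Bernoulli draws whose means play the role of hidden per-class learnability.

rng = np.random.default_rng(0)
true_learnability = np.array([0.9, 0.85, 0.4, 0.3, 0.1])  # assumed, hidden
n_classes = len(true_learnability)

counts = np.zeros(n_classes)       # how often each class was selected
reward_sums = np.zeros(n_classes)  # cumulative reward per class

for t in range(1, 2001):
    if t <= n_classes:
        arm = t - 1  # pull every arm once to initialize the estimates
    else:
        means = reward_sums / counts
        bonus = np.sqrt(2.0 * np.log(t) / counts)  # UCB exploration bonus
        arm = int(np.argmax(means + bonus))
    # In the paper's setting this reward would be measured validation
    # performance on class `arm`; here it is simulated.
    reward = rng.binomial(1, true_learnability[arm])
    counts[arm] += 1
    reward_sums[arm] += reward

print("selection counts per class:", counts.astype(int))
```

Running this, the pull counts concentrate on the high-reward (more learnable) classes, which is the qualitative behavior the paper reports for its class-selection procedure.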
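
The Dataset Splits row describes a 40k/10k/10k partition of CIFAR-100: the standard 50k training images are split into 40k for training and 10k for validation, with the official 10k test set untouched. A minimal sketch of such a hold-out split over indices is below; the uniform random, unstratified selection is our assumption, since the paper does not say how the 10k validation images were chosen.

```python
import numpy as np

# Hold-out split of CIFAR-100's 50k training indices into 40k train / 10k val.
# Random, unstratified selection is an assumption; the paper does not specify it.
rng = np.random.default_rng(0)
perm = rng.permutation(50_000)
val_idx = perm[:10_000]    # 10k images held out for validation
train_idx = perm[10_000:]  # remaining 40k images used for training

print(len(train_idx), len(val_idx))  # -> 40000 10000
```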