Online Selective Classification with Limited Feedback

Authors: Aditya Gangrade, Anil Kag, Ashok Cutkosky, Venkatesh Saligrama

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "The theoretical exploration is complemented by illustrative experiments that implement our scheme on two benchmark datasets. We evaluate the performance of Algorithm 2 on two tasks, CIFAR-10 [KH09] and GAS [Ver+12]; see E for details of implementation, and here for the relevant code." |
| Researcher Affiliation | Academia | Aditya Gangrade (Boston University, gangrade@bu.edu), Anil Kag (Boston University, anilkag@bu.edu), Ashok Cutkosky (Boston University, ashok@cutkosky.com), Venkatesh Saligrama (Boston University, srv@bu.edu) |
| Pseudocode | Yes | "Algorithm 1 VUE" |
| Open Source Code | Yes | "code to reproduce the same is made available at https://github.com/anilkagak2/Online-Selective-Classification" |
| Open Datasets | Yes | "We evaluate the performance of Algorithm 2 on two tasks, CIFAR-10 [KH09] and GAS [Ver+12]" |
| Dataset Splits | No | No explicit statement of training/validation/test dataset splits (e.g., percentages, sample counts, or citations of standard splits) was found. The text mentions a "training set" and "test datasets" but gives no specific split information. |
| Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running the experiments are mentioned in the provided text. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., "Python 3.8, PyTorch 1.9") are explicitly mentioned in the provided text. |
| Experiment Setup | Yes | "The hyperparameters (µ, t) provide control over various levels of accuracy and abstention. Concretely, we vary these linearly for 20 values of p ∈ [0.015, 0.285], and 10 values of ε ∈ [0.001, 0.046]." |
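
The linearly varied hyperparameter sweep quoted above (20 values of p, 10 values of ε) can be sketched as follows. This is a minimal illustration of the grid construction only, assuming uniform linear spacing over the stated intervals; variable names and the use of NumPy are assumptions, not taken from the paper's released code.

```python
import numpy as np
from itertools import product

# Assumed sweep: 20 linearly spaced values of p in [0.015, 0.285]
# and 10 linearly spaced values of eps in [0.001, 0.046].
p_values = np.linspace(0.015, 0.285, num=20)
eps_values = np.linspace(0.001, 0.046, num=10)

# Full Cartesian grid of (p, eps) configurations: 20 x 10 = 200 runs,
# each giving one accuracy/abstention trade-off point.
grid = list(product(p_values, eps_values))
```

Each (p, ε) pair would correspond to one run of the algorithm, tracing out the accuracy/abstention trade-off curve reported in the experiments.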