Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples

Authors: Shafi Goldwasser, Adam Tauman Kalai, Yael Kalai, Omar Montasser

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "As a proof of concept, we perform simple controlled experiments on the task of handwritten letter classification using lower-case English letters from the EMNIST dataset (Cohen et al. [2017])." |
| Researcher Affiliation | Collaboration | Shafi Goldwasser (UC Berkeley, MIT); Adam Tauman Kalai (Microsoft Research); Yael Tauman Kalai (Microsoft Research, MIT); Omar Montasser (TTI Chicago) |
| Pseudocode | Yes | "Figure 2: The Rejectron algorithm takes labeled training examples and unlabeled test examples as input, and it outputs a selective classifier h\|S that predicts h(x) for x ∈ S (and rejects all x ∉ S)." |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | EMNIST dataset (Cohen et al. [2017]) |
| Dataset Splits | No | The paper mentions using the EMNIST dataset for experiments and describes two experimental setups, but it does not provide specific train/validation/test split percentages, sample counts, or explicit references to predefined splits for reproducibility. |
| Hardware Specification | No | The paper describes performing "simple controlled experiments" but does not provide any specific hardware details such as GPU or CPU models, or memory specifications used for running these experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software libraries or dependencies used in the experiments. |
| Experiment Setup | No | The paper outlines two experimental setups for the handwritten letter classification task but does not provide specific details on hyperparameters, training configurations, or other system-level settings required to reproduce the experiments. |
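The pseudocode row describes the paper's output object: a selective classifier h|S that predicts h(x) when x lies in an accepted set S and rejects otherwise. A minimal sketch of that interface is below; the base classifier `h` and the membership test `in_S` are illustrative stand-ins, not the components Rejectron actually learns.

```python
# Hedged sketch of a selective classifier h|S: predict h(x) for x in S,
# reject every x outside S. `h` and `in_S` are hypothetical placeholders.

REJECT = None  # sentinel value standing in for the "reject" outcome


def selective_classifier(h, in_S):
    """Restrict base classifier `h` to the acceptance set tested by `in_S`."""
    def h_restricted(x):
        return h(x) if in_S(x) else REJECT
    return h_restricted


# Toy usage: classify non-negative integers by parity, reject negatives.
h = lambda x: x % 2
in_S = lambda x: x >= 0
clf = selective_classifier(h, in_S)
print(clf(4))   # x in S: predicts h(4) = 0
print(clf(-3))  # x not in S: rejected (None)
```

The wrapper keeps prediction and rejection decoupled, which mirrors how the paper's guarantees are stated: error is measured only on accepted points, while the rejection rate is bounded separately.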