Reliable learning in challenging environments

Authors: Maria-Florina F. Balcan, Steve Hanneke, Rattana Pukdee, Dravyansh Sharma

NeurIPS 2023

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical | In this work, we consider the design and analysis of reliable learners in challenging test-time environments as encountered in modern machine learning problems, namely adversarial test-time attacks (in several variations) and natural distribution shifts. We provide a reliable learner with provably optimal guarantees in such settings. We discuss computationally feasible implementations of the learner and further show that our algorithm achieves strong positive performance guarantees on several natural examples: for example, linear separators under log-concave distributions or smooth boundary classifiers under smooth probability distributions.

Researcher Affiliation | Academia | Maria-Florina Balcan (Carnegie Mellon University, ninamf@cs.cmu.edu); Steve Hanneke (Purdue University, steve.hanneke@gmail.com); Rattana Pukdee (Carnegie Mellon University, rpukdee@cs.cmu.edu); Dravyansh Sharma (Carnegie Mellon University, dravyans@cs.cmu.edu)

Pseudocode | No | The paper describes its algorithms conceptually and provides mathematical formulations, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block with structured steps.

Open Source Code | No | The paper does not contain any statements about releasing open-source code for the methodology described, nor does it provide any links to a code repository.

Open Datasets | No | The paper is theoretical, discussing concepts such as a sample S = {(x_i, y_i)}_{i=1}^m and a distribution D over X × Y, without referring to specific named public datasets (e.g., CIFAR-10, MNIST) or providing access information for any data.

Dataset Splits | No | The paper is theoretical and does not mention any training, validation, or test dataset splits.

Hardware Specification | No | The paper is theoretical and does not mention any specific hardware (e.g., GPU models, CPU types) used for running experiments.

Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies or version numbers.

Experiment Setup | No | The paper is theoretical and does not provide specific experimental setup details, hyperparameters, or system-level training settings.
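To make the abstract's notion of a "reliable learner" concrete, here is a minimal sketch of the underlying idea: a classifier that predicts only where it can do so reliably, and abstains elsewhere. This toy version-space-agreement rule (predict when every hypothesis consistent with the sample agrees) is an illustrative simplification for 1D threshold classifiers, not the paper's actual algorithm; all function names and data here are hypothetical.

```python
# Toy sketch of a reliable (abstaining) learner, NOT the paper's algorithm:
# predict only where every hypothesis consistent with the training sample
# agrees; abstain (return None) otherwise.

def threshold_classifier(t):
    """1D threshold hypothesis: label +1 iff x >= t."""
    return lambda x: 1 if x >= t else -1

def consistent_hypotheses(hypotheses, sample):
    """Keep only hypotheses that label every training point correctly."""
    return [h for h in hypotheses if all(h(x) == y for x, y in sample)]

def reliable_predict(hypotheses, sample, x):
    """Return +1/-1 if all consistent hypotheses agree on x, else None."""
    version_space = consistent_hypotheses(hypotheses, sample)
    preds = {h(x) for h in version_space}
    return preds.pop() if len(preds) == 1 else None

# Hypothetical candidate thresholds and a small labeled sample.
H = [threshold_classifier(t) for t in [0.0, 1.0, 2.0, 3.0]]
S = [(0.5, -1), (2.5, 1)]  # only thresholds 1.0 and 2.0 fit this sample

print(reliable_predict(H, S, 3.0))  # both surviving hypotheses say +1
print(reliable_predict(H, S, 1.5))  # surviving hypotheses disagree: abstain
```

The point of the sketch is the abstention behavior: predictions are only issued in the region where the data pins down the label, which is the kind of guarantee the paper studies (and extends to adversarial perturbations and distribution shift).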