A General Method for Robust Learning from Batches
Authors: Ayush Jain, Alon Orlitsky
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | "We develop a general framework of robust learning from batches, and determine the limits of both distribution estimation, and notably, classification, over arbitrary, including continuous, domains. Building on this framework, we derive the first robust agnostic: (1) polynomial-time distribution estimation algorithms for structured distributions... (2) classification algorithms... (3) computationally-efficient algorithms..." |
| Researcher Affiliation | Academia | "Ayush Jain and Alon Orlitsky, Dept. of Electrical and Computer Engineering, University of California, San Diego. {ayjain, aorlitsky}@eng.ucsd.edu" |
| Pseudocode | No | The paper provides an overview of the filtering framework and refers to algorithms and proofs in the appendix, but the main body contains no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | The paper is theoretical and focuses on algorithm derivation and analysis of sample complexity. It does not mention specific datasets or provide access information for any training data. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not specify any hardware (CPU, GPU, etc.) used for experiments, as it is a theoretical work. |
| Software Dependencies | No | The paper does not specify any software dependencies or version numbers. It cites existing algorithms (e.g., [ADLS17], [Maa94]) but no specific software implementations. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or system-level training settings. |