On the Complexity of Learning from Label Proportions

Authors: Benjamin Fish, Lev Reyzin

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We formalize a simple version of the setting, and we compare the computational complexity of learning in this model to classical PAC learning. Perhaps surprisingly, we show that what can be learned efficiently in this model is a strict subset of what may be learned efficiently in PAC, under standard complexity assumptions. We give a characterization in terms of VC dimension, and we show that there are non-trivial problems in this model that can be efficiently learned. We also give an algorithm that demonstrates the feasibility of learning under well-behaved distributions.
Researcher Affiliation | Academia | Benjamin Fish and Lev Reyzin, Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, IL 60607; bfish3@uic.edu, lreyzin@uic.edu
Pseudocode | No | The paper includes theoretical definitions, theorems, and proofs but does not provide any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any links to source code or explicitly state that code is available.
Open Datasets | No | This is a theoretical paper. It discusses abstract distributions and sample complexities but does not use or refer to any specific publicly available datasets.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with dataset splits for training, validation, or testing.
Hardware Specification | No | This is a theoretical paper and does not mention any hardware specifications used for experiments.
Software Dependencies | No | This is a theoretical paper and does not mention any specific software dependencies or versions.
Experiment Setup | No | This paper is theoretical and does not describe an experimental setup with hyperparameters or training settings.
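As the table notes, the paper itself provides no pseudocode or experiments. To make the learning-from-label-proportions setting concrete, here is a hypothetical toy sketch (an illustrative assumption, not the paper's algorithm): a learner sees bags of unlabeled 1-D points together with only each bag's fraction of positive labels, and picks the threshold classifier whose predicted bag proportions best match the observed ones.

```python
import random

random.seed(0)

# Toy instance of learning from label proportions (LLP), purely
# illustrative: the true concept is "x >= 0.5", but the learner
# never sees individual labels, only per-bag label proportions.

def make_bag(n, threshold=0.5):
    xs = [random.random() for _ in range(n)]
    prop = sum(x >= threshold for x in xs) / n  # label proportion only
    return xs, prop

bags = [make_bag(50) for _ in range(20)]

def proportion_loss(t):
    # Squared error between the proportions a threshold-t classifier
    # would predict and the observed bag proportions.
    return sum((sum(x >= t for x in xs) / len(xs) - p) ** 2
               for xs, p in bags)

# Grid search over candidate thresholds for the best proportion match.
best_t = min((i / 100 for i in range(101)), key=proportion_loss)
print(best_t)
```

Matching aggregate proportions rather than per-example labels is what distinguishes this model from PAC learning; the paper's complexity results concern exactly when such proportion information suffices for efficient learning.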