Learnability with Indirect Supervision Signals

Authors: Kaifu Wang, Qiang Ning, Dan Roth

NeurIPS 2020

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical
    We develop a unified theoretical framework for multi-class classification in which supervision is provided by a variable that has nonzero mutual information with the gold label. The theory introduces a novel concept called separation, which characterizes learnability and yields generalization bounds. All proofs of the theoretical results are presented in the supplementary material. The work focuses almost entirely on theoretical aspects of learning.

Researcher Affiliation | Collaboration
    Kaifu Wang (University of Pennsylvania, kaifu@sas.upenn.edu); Qiang Ning (Amazon, qning@amazon.com); Dan Roth (University of Pennsylvania, danroth@seas.upenn.edu)

Pseudocode | No
    The paper presents theoretical concepts, theorems, and proofs, but includes no pseudocode or algorithm blocks.

Open Source Code | No
    The paper is theoretical and does not mention or provide access to any open-source code for its framework or methods.

Open Datasets | No
    The paper is theoretical: the training set S = {(x^(i), o^(i))}_{i=1}^m is defined abstractly as independent samples of X and O for the analysis. It does not refer to any specific, publicly available dataset with access information.

Dataset Splits | No
    The paper does not describe any training, validation, or test splits for empirical reproduction.

Hardware Specification | No
    The paper is purely theoretical and reports no experiments, so no hardware specifications are mentioned.

Software Dependencies | No
    The paper lists no software dependencies or version numbers.

Experiment Setup | No
    The paper provides no details on experimental setup, hyperparameters, or training configurations.
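The Research Type entry hinges on the condition that the supervision variable O carries nonzero mutual information with the gold label Y. The following sketch (not from the paper; the joint distribution is a made-up toy in which O only reveals whether Y equals 0, a coarse superset-style signal) shows how that condition can be checked numerically:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(O; Y) in nats, computed from a joint
    probability table whose rows index O and columns index Y."""
    joint = np.asarray(joint, dtype=float)
    p_o = joint.sum(axis=1, keepdims=True)   # marginal of O
    p_y = joint.sum(axis=0, keepdims=True)   # marginal of Y
    mask = joint > 0                         # skip zero cells (0 * log 0 = 0)
    return float((joint[mask] * np.log(joint[mask] / (p_o @ p_y)[mask])).sum())

# Hypothetical indirect-supervision setting: Y is uniform over {0, 1, 2},
# but the annotation O only tells us whether Y == 0.
joint = np.array([
    [1/3, 0.0, 0.0],   # O = "Y is 0"
    [0.0, 1/3, 1/3],   # O = "Y is 1 or 2"
])

mi = mutual_information(joint)
print(f"I(O; Y) = {mi:.4f} nats")  # → I(O; Y) = 0.6365 nats, i.e. nonzero
```

Because O here is a deterministic function of Y, I(O; Y) reduces to the entropy of O, H(1/3, 2/3) ≈ 0.6365 nats; any O statistically dependent on Y would likewise yield a strictly positive value.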