Efficient PAC Learning from the Crowd with Pairwise Comparisons

Authors: Shiwei Zeng, Jie Shen

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main algorithmic contributions are a comparison-equipped labeling scheme that can faithfully recover the true labels of a small set of instances, and a label-efficient filtering process that, in conjunction with the small labeled set, can reliably infer the true labels of a large instance set. The paper presents theoretical algorithms (Algorithms 1-4), mathematical proofs (theorems, lemmas, propositions), and complexity analysis, with no empirical evaluation on datasets reported for its own work.
Researcher Affiliation | Academia | Department of Computer Science, Stevens Institute of Technology, Hoboken, New Jersey, USA. Correspondence to: Shiwei Zeng <szeng4@stevens.edu>, Jie Shen <jie.shen@stevens.edu>.
Pseudocode | Yes | Algorithm 1 COMPARE-AND-LABEL; Algorithm 2 Main Algorithm; Algorithm 3 ANTI-ANTI-CONCENTRATE; Algorithm 4 FILTER.
Open Source Code | No | The paper does not contain any statement about releasing its source code, nor does it provide links to a code repository for the described methodology.
Open Datasets | No | The paper is theoretical and does not conduct experiments with specific datasets; therefore, it does not provide information about publicly available training data.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and does not specify any versioned software dependencies that would be required to replicate experiments.
Experiment Setup | No | The paper describes algorithms and their theoretical properties, but it does not provide experimental setup details such as hyperparameters or training configurations.
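The paper's algorithms themselves are only stated as pseudocode, but the general "comparisons plus a few label queries" idea they build on can be illustrated concretely. The following is a minimal sketch, not the paper's actual procedure: it assumes a hypothetical one-dimensional threshold classifier sign(x - threshold) and a simulated crowd worker who answers comparisons correctly with probability p. All names (`crowd_compare`, `majority_compare`, `label_by_comparisons`) and parameters (`k`, `p`) are invented for this illustration.

```python
import functools
import random

def crowd_compare(x, y, p):
    """Simulated crowd worker: answers 'is x < y?' correctly with probability p.
    (Hypothetical noise model for illustration, not the paper's model.)"""
    truth = x < y
    return truth if random.random() < p else not truth

def majority_compare(x, y, k, p):
    """Boost a noisy comparison by asking it k times and taking a majority vote."""
    votes = sum(crowd_compare(x, y, p) for _ in range(k))
    return votes > k // 2

def label_by_comparisons(items, threshold, k=15, p=0.9):
    """Label every item using comparisons plus O(log n) label queries:
    sort the items by (majority-voted) crowd comparisons, then binary-search
    for the decision boundary of the threshold classifier sign(x - threshold)."""
    order = sorted(items, key=functools.cmp_to_key(
        lambda a, b: -1 if majority_compare(a, b, k, p) else 1))
    lo, hi = 0, len(order)
    label_queries = 0
    while lo < hi:  # find the first index whose true label is +1
        mid = (lo + hi) // 2
        label_queries += 1  # one (here noiseless) label query per iteration
        if order[mid] >= threshold:
            hi = mid
        else:
            lo = mid + 1
    labels = {x: (+1 if i >= lo else -1) for i, x in enumerate(order)}
    return labels, label_queries
```

With perfect comparisons (p = 1), all n items are labeled with only about log2(n) label queries; with noisy workers, each label query would also be repeated and majority-voted, which is the flavor of label-efficiency trade-off that comparison-equipped labeling schemes target.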