Fast Co-Training under Weak Dependence via Stream-Based Active Learning

Authors: Ilias Diakonikolas, Mingchen Ma, Lisheng Ren, Christos Tzamos

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This work presents theoretical results on co-training and stream-based active learning, aimed at advancing the theory of machine learning.
Researcher Affiliation | Academia | (1) Department of Computer Sciences, University of Wisconsin-Madison, Madison, USA; (2) University of Athens and Archimedes AI, Athens, Greece. Correspondence to: Mingchen Ma <mingchen@cs.wisc.edu>, Lisheng Ren <lren29@wisc.edu>.
Pseudocode | Yes | The paper includes pseudocode for six algorithms: Algorithm 1 REDUCTION(A1, A2) (Efficient Black-Box Reduction from Co-Training to Online Learning); Algorithm 2 LEARNK-INTERVAL (Efficient co-training k intervals); Algorithm 3 CO-HALVING(H) (Co-training VC classes via Halving); Algorithm 4 LEARNK-INTERVAL (Efficient co-training k intervals); Algorithm 5 Co-training Halfspaces without Margin with Label Queries; Algorithm 6 Subroutine for Co-training Partial Classifier using Label Queries. A hedged sketch of the Halving-style idea follows this table.
Open Source Code | No | The paper is theoretical and does not mention providing open-source code for the methodology described.
Open Datasets | No | The paper is theoretical and does not describe experiments performed on publicly available datasets.
Dataset Splits | No | The paper is theoretical and does not describe experiments with training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training settings.
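
The pseudocode row above only names the paper's algorithms. As an illustration of the general idea behind Algorithm 3, CO-HALVING(H), here is a minimal Python sketch of a Halving-style co-training loop over a stream with two views. This is an assumption-laden reconstruction, not the authors' algorithm: the example format (x1, x2), the finite hypothesis lists, and the query_label oracle are all hypothetical, and the update rule is the textbook Halving/disagreement-based idea rather than the paper's analysis.

```python
from collections import Counter

def majority_vote(version_space, x):
    """Return the majority label of the surviving hypotheses on x."""
    votes = Counter(h(x) for h in version_space)
    return votes.most_common(1)[0][0]

def co_halving(stream, h1_space, h2_space, query_label):
    """Hypothetical stream-based co-training loop with Halving updates.

    Each example arrives as two views (x1, x2) sharing one label.
    The label is queried only when the two views' majority votes
    disagree; the queried label then prunes both version spaces.
    """
    H1, H2 = list(h1_space), list(h2_space)
    queries, predictions = 0, []
    for x1, x2 in stream:
        y1 = majority_vote(H1, x1)
        y2 = majority_vote(H2, x2)
        if y1 == y2:
            predictions.append(y1)   # views agree: no query spent
            continue
        y = query_label(x1, x2)      # views disagree: spend one query
        queries += 1
        predictions.append(y)
        # Halving step: keep only hypotheses consistent with the label.
        H1 = [h for h in H1 if h(x1) == y]
        H2 = [h for h in H2 if h(x2) == y]
    return predictions, queries
```

Under these assumptions, in the realizable case a queried label disagrees with at least one view's majority vote, so that view's version space shrinks by at least half on every query, giving on the order of log2|H1| + log2|H2| label queries in total.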