Collaborative Learning with Different Labeling Functions
Authors: Yuyang Deng, Mingda Qiao
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We give a learning algorithm based on Empirical Risk Minimization (ERM) on a natural augmentation of the hypothesis class; the analysis relies on an upper bound on the VC dimension of this augmented class. In terms of computational efficiency, we show that ERM on the augmented hypothesis class is NP-hard, which gives evidence against the existence of computationally efficient learners in general. On the positive side, for two special cases, we give learners that are both sample- and computationally efficient. (A toy sketch of the ERM-on-augmented-class idea follows the table.) |
| Researcher Affiliation | Academia | Pennsylvania State University, State College, PA, USA; University of California, Berkeley, Berkeley, CA, USA. |
| Pseudocode | Yes | Algorithm 1: Collaborative Learning via Approximate Coloring |
| Open Source Code | No | The paper does not include an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper defines abstract 'data distributions D1, D2, ..., Dn' for theoretical analysis and does not refer to specific, publicly available datasets for empirical training or evaluation. |
| Dataset Splits | No | This theoretical paper does not conduct empirical experiments on real datasets and therefore does not define training, validation, or test splits. |
| Hardware Specification | No | This theoretical paper does not describe the hardware used for any experiments. |
| Software Dependencies | No | This theoretical paper does not specify software dependencies with version numbers. |
| Experiment Setup | No | This theoretical paper describes algorithms and their properties but does not detail an experimental setup with hyperparameters or specific training settings. |
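
The Research Type and Pseudocode rows describe an ERM-based learner over an augmented hypothesis class in which n distributions must be covered by k hypotheses. The sketch below is a minimal brute-force illustration of that idea on a toy finite class; it is not a reproduction of the paper's Algorithm 1 (Collaborative Learning via Approximate Coloring), and the function names (`erm_k_hypotheses`, `empirical_error`) and the toy threshold class are hypothetical.

```python
"""Toy brute-force ERM sketch for the collaborative setting: find k hypotheses
from a finite class so that every player's sample is well served by at least
one of them. Illustrative only; not the paper's Algorithm 1."""
from itertools import combinations


def empirical_error(h, sample):
    """Fraction of (x, y) pairs in `sample` that hypothesis h mislabels."""
    return sum(h(x) != y for x, y in sample) / len(sample)


def erm_k_hypotheses(hypotheses, samples_per_player, k):
    """Brute-force ERM over an 'augmented' class: every size-k subset of the
    finite hypothesis class, scored by the worst-case (over players) error when
    each player is assigned its best hypothesis in the subset. The exhaustive
    search is exponential in k, consistent with the paper's NP-hardness
    evidence against efficient ERM in general."""
    best_subset, best_score = None, float("inf")
    for subset in combinations(hypotheses, k):
        score = max(
            min(empirical_error(h, sample) for h in subset)
            for sample in samples_per_player
        )
        if score < best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score


if __name__ == "__main__":
    # Toy 1-D threshold hypotheses; three players labeled by two distinct
    # thresholds, so k = 2 hypotheses suffice to cover all of them.
    thresholds = [0.2, 0.5, 0.8]
    hypotheses = [lambda x, t=t: int(x >= t) for t in thresholds]
    xs = [i / 10 for i in range(11)]
    samples = [
        [(x, int(x >= 0.2)) for x in xs],  # player 1: threshold 0.2
        [(x, int(x >= 0.2)) for x in xs],  # player 2: threshold 0.2
        [(x, int(x >= 0.8)) for x in xs],  # player 3: threshold 0.8
    ]
    subset, err = erm_k_hypotheses(hypotheses, samples, k=2)
    print("worst-case empirical error of the best pair:", err)
```

On this toy input the best pair of thresholds attains zero worst-case error; the point of the sketch is only to show how the augmented ERM objective (each player picks its best hypothesis among the chosen k) is scored, not how the paper's approximate-coloring algorithm avoids the exhaustive search.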