Collaborative PAC Learning

Authors: Avrim Blum, Nika Haghtalab, Ariel D. Procaccia, Mingda Qiao

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We design learning algorithms with O(ln(k)) and O(ln^2(k)) overhead in the personalized and centralized variants of our model. This gives an exponential improvement upon the naïve algorithm that does not share information among players. We complement our upper bounds with an Ω(ln(k)) overhead lower bound, showing that our results are tight up to a logarithmic factor.
Researcher Affiliation | Academia | Avrim Blum, Toyota Technological Institute at Chicago, Chicago, IL 60637, avrim@ttic.edu; Nika Haghtalab, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, nhaghtal@cs.cmu.edu; Ariel D. Procaccia, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, arielpro@cs.cmu.edu; Mingda Qiao, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China 100084, qmd14@mails.tsinghua.edu.cn
Pseudocode | Yes | Algorithm 1 (PERSONALIZED LEARNING) and Algorithm 2 (CENTRALIZED LEARNING) are explicitly labeled algorithm blocks that provide structured pseudocode (an illustrative sketch of the round-based structure appears after this table).
Open Source Code | No | The paper does not include any statements about releasing source code or provide links to a code repository.
Open Datasets | No | The paper is theoretical and defines conceptual distributions (D_1, ..., D_k) and hypothesis classes (F). It does not use specific named datasets or provide any access information for a public dataset.
Dataset Splits | No | As a theoretical paper, it does not conduct experiments with actual data, and therefore does not discuss training/test/validation dataset splits or cross-validation.
Hardware Specification | No | The paper does not mention any specific hardware used for running experiments, which is consistent with its theoretical nature.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper discusses theoretical parameters (ϵ, δ) and sample complexity but does not provide concrete experimental setup details such as hyperparameters or system-level training settings.
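For readers skimming the Research Type row, the quoted bounds can be restated in displayed form. The LaTeX fragment below is a hedged sketch, assuming the usual convention that "overhead" means the ratio of the collaborative protocol's total sample complexity to the single-task PAC sample complexity m_{ϵ,δ}(F); the symbol m_collab and the exact regime of k are assumptions for illustration, not verbatim definitions from the paper.

```latex
% Hedged sketch: overhead of a collaborative protocol relative to
% (eps, delta)-PAC learning a single task from the class F.
% m_{eps,delta}(F): single-task sample complexity (assumed notation).
\[
  \mathrm{overhead}
    \;=\; \frac{m_{\mathrm{collab}}(\epsilon,\delta,k)}{m_{\epsilon,\delta}(\mathcal{F})},
  \qquad
  \mathrm{overhead} \;=\;
  \begin{cases}
    O(\ln k)      & \text{personalized variant (upper bound)}\\
    O(\ln^2 k)    & \text{centralized variant (upper bound)}\\
    \Omega(\ln k) & \text{lower bound}
  \end{cases}
\]
```

Under this reading, the naïve baseline of learning each of the k tasks separately has overhead on the order of k, which is why the abstract describes the O(ln(k)) bound as an exponential improvement.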
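The Pseudocode row refers to two algorithm blocks in the paper itself. As a rough, non-authoritative illustration of the "learn on the mixture, test each player, drop the satisfied players" pattern that such round-based collaborative learners follow in the personalized setting, here is a minimal Python sketch. The oracles `draw_mixture_sample`, `pac_learn`, and `estimate_error`, as well as the sample size and round budget, are hypothetical stand-ins supplied by the caller; this is not the paper's Algorithm 1.

```python
import math
from typing import Callable, Dict, List, Tuple

# Hedged sketch of a round-based collaborative PAC learning loop in the
# personalized setting (each player may end up with a different hypothesis).
# All oracles below are hypothetical stand-ins, not APIs from the paper.
Sample = Tuple[object, int]           # (instance, label)
Hypothesis = Callable[[object], int]  # predicts a label for an instance


def personalized_rounds(
    players: List[int],
    draw_mixture_sample: Callable[[List[int], int], List[Sample]],
    pac_learn: Callable[[List[Sample]], Hypothesis],
    estimate_error: Callable[[Hypothesis, int], float],
    epsilon: float,
) -> Dict[int, Hypothesis]:
    """Tries to assign a hypothesis to every player over roughly log(k) rounds.

    Each round: learn one hypothesis from samples drawn from the uniform
    mixture of the still-unsatisfied players' distributions, then keep it for
    every player on whose distribution its estimated error is <= epsilon.
    """
    if not players:
        return {}
    assigned: Dict[int, Hypothesis] = {}
    remaining = list(players)
    num_rounds = math.ceil(math.log2(len(players))) + 1  # heuristic round budget
    sample_size = 1000  # placeholder; real bounds depend on F, epsilon, delta

    for _ in range(num_rounds):
        if not remaining:
            break
        data = draw_mixture_sample(remaining, sample_size)
        hypothesis = pac_learn(data)
        still_unsatisfied = []
        for player in remaining:
            if estimate_error(hypothesis, player) <= epsilon:
                assigned[player] = hypothesis     # player is satisfied this round
            else:
                still_unsatisfied.append(player)  # retry in a later round
        remaining = still_unsatisfied

    # In this sketch, any players left unsatisfied after the round budget are
    # simply omitted from the result; the paper's algorithm comes with
    # high-probability guarantees that this sketch does not reproduce.
    return assigned
```

Intuitively, if each round satisfies a constant fraction of the remaining players, then O(ln k) rounds suffice, which matches the O(ln(k)) overhead quoted in the Research Type row; the actual guarantees and sample sizes are stated in the paper's algorithm blocks and theorems.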