On Batch Teaching with Sample Complexity Bounded by VCD

Authors: Farnam Mansouri, Hans Simon, Adish Singla, Sandra Zilles

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper is the first to prove an upper bound of VCD on a batch teaching complexity parameter. The parameter, called STDmin, is introduced as a model of teaching that intuitively incorporates a notion of the importance of an example for a concept. In designing the STDmin teaching model, the authors argue that the standard notion of collusion-freeness from the literature may be inadequate for certain applications; they therefore propose three desirable properties of teaching complexity and show that STDmin satisfies all three. The main results are that the corresponding complexity parameter is upper-bounded by both RTD and VCD, making this the first paper to establish an upper bound of VCD, or O(VCD), on a complexity parameter for batch teaching.
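The bounds above are stated in terms of the VC dimension (VCD). As a quick illustration of what that parameter measures, the following is a brute-force computation for a small finite concept class; this is an illustrative sketch based on the standard definition, not code from the paper (the function name and example classes are invented for illustration):

```python
from itertools import combinations

def vc_dimension(concepts, domain):
    """Brute-force VC dimension: the size of the largest subset of the
    domain that is shattered, i.e. on which the concept class induces
    all 2^d possible labelings."""
    best = 0
    for d in range(1, len(domain) + 1):
        for pts in combinations(domain, d):
            # Collect the distinct labelings the class induces on pts.
            patterns = {tuple(x in c for x in pts) for c in concepts}
            if len(patterns) == 2 ** d:  # all labelings realized -> shattered
                best = d
                break  # some set of size d is shattered; try size d + 1
    return best

# Initial segments of {1,...,4} (threshold concepts): VCD is 1.
thresholds = [set(), {1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}]
print(vc_dimension(thresholds, [1, 2, 3, 4]))  # -> 1

# All intervals {i,...,j} within {1,...,4}, plus the empty set: VCD is 2.
intervals = [set(range(i, j + 1)) for i in range(1, 5)
             for j in range(i, 5)] + [set()]
print(vc_dimension(intervals, [1, 2, 3, 4]))  # -> 2
```

The paper's contribution is showing STDmin ≤ VCD, i.e. the teaching parameter never exceeds the quantity this function computes.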
Researcher Affiliation | Academia | Farnam Mansouri (University of Toronto, frnm.mansouri@gmail.com); Hans U. Simon (Max Planck Institute for Informatics, hsimon@mpi-inf.mpg.de); Adish Singla (Max Planck Institute for Software Systems, adishs@mpi-sws.org); Sandra Zilles (University of Regina, zilles@cs.uregina.ca)
Pseudocode | No | The paper describes algorithmic steps in prose within the text (e.g., "Next, we define Tk for k > 1 by the following algorithm. 1. For j < k, let Tk(Cj) = Tk−1(Cj). 2. To define Tk(Ck), initialize Tk(Ck) := T(Ck)."), but it does not present them in a formally structured "Pseudocode" or "Algorithm" block or figure.
Open Source Code | No | The paper does not contain any statements about releasing source code for the described methodology, nor does it provide links to any code repositories.
Open Datasets | No | The paper uses illustrative examples such as the concept class Cpair u [Zilles et al., 2011], for the case u = 3, in Table 1, but it does not refer to or provide access information for any publicly available datasets used for training or evaluation in the context of empirical experiments.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments involving dataset splits for training, validation, or testing.
Hardware Specification | No | The paper focuses on theoretical concepts and does not describe any computational experiments; therefore, it does not provide any hardware specifications.
Software Dependencies | No | The paper is theoretical and does not describe software implementations or dependencies with version numbers.
Experiment Setup | No | The paper is purely theoretical and does not include any experimental setup details such as hyperparameters or system-level training settings.