Interactive Structure Learning with Structural Query-by-Committee

Authors: Christopher Tosh, Sanjoy Dasgupta

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this work, we introduce interactive structure learning, a framework that unifies many different interactive learning tasks. We present a generalization of the query-by-committee active learning algorithm for this setting, and we study its consistency and rate of convergence, both theoretically and empirically, with and without noise. In the appendix, we give rates of convergence in terms of a shrinkage coefficient, present experiments on a variety of interactive learning tasks, and give an overview of related work."
Researcher Affiliation | Academia | "Christopher Tosh, Columbia University, c.tosh@columbia.edu; Sanjoy Dasgupta, UC San Diego, dasgupta@cs.ucsd.edu"
Pseudocode | Yes | "Algorithm 1 STRUCTURAL QBC; Algorithm 2 ROBUST QUERY SELECTION"
Open Source Code | No | The paper contains no explicit statement about releasing source code and provides no link to a code repository.
Open Datasets | Yes | "In the appendix, we provide experiments on both interactive clustering and active learning tasks." The experiments draw on the UCI repository: "[27] M. Lichman. UCI machine learning repository, 2013."
Dataset Splits | No | The paper mentions experiments on specific tasks but gives no details on training, validation, and test splits (e.g., percentages, sample counts, or cross-validation setup) in the main text.
Hardware Specification | No | The paper reports empirical evaluations of its theoretical framework and algorithms but gives no details about the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper lists no software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | No | The paper describes the algorithms and their theoretical properties but does not give specific experimental setup details such as hyperparameter values (learning rate, batch size, epochs), optimizer settings, or other training configurations.
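The assessment above notes that the paper provides pseudocode for STRUCTURAL QBC, a generalization of classical query-by-committee. As illustrative context only (this is not the paper's algorithm), here is a minimal sketch of plain query-by-committee selection: a committee of binary hypotheses votes on each unlabeled point, and the point with the most disagreement is queried. All function and variable names here are hypothetical.

```python
def qbc_select(pool, committee):
    """Pick the pool point on which the committee of binary
    hypotheses disagrees most (positive-vote fraction closest to 1/2)."""
    def disagreement(x):
        votes = [h(x) for h in committee]
        p = sum(votes) / len(votes)
        return min(p, 1 - p)  # 0 = unanimous, 0.5 = even split
    return max(pool, key=disagreement)

# Toy example: a committee of threshold classifiers on the real line.
committee = [lambda x, t=t: x >= t for t in [0.2, 0.4, 0.6, 0.8]]
print(qbc_select([0.1, 0.5, 0.9], committee))  # 0.5 splits the committee 2-2
```

In the toy example, 0.1 and 0.9 are classified unanimously, while 0.5 splits the four thresholds two against two, so it is the point selected for labeling. Structural QBC, per the paper's abstract, extends this idea beyond plain label queries to general interactive structure learning.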