Multiclass Boosting: Simple and Intuitive Weak Learning Criteria

Authors: Nataly Brukhim, Amit Daniely, Yishay Mansour, Shay Moran

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We study a generalization of boosting to the multiclass setting. We introduce a weak learning condition... We give a simple and efficient boosting algorithm... In addition, we utilize our new boosting technique in several theoretical applications..." The paper further states: "We give a simple weak learning assumption that retains the notion of weak learnability as 'slightly-better-than-random-guess' from the binary case. Furthermore, we give an efficient multiclass boosting algorithm, as formally stated in Theorem 1 below."
Researcher Affiliation | Collaboration | Nataly Brukhim (Department of Computer Science, Princeton University); Amit Daniely (Department of Computer Science, Hebrew University; Google Research); Yishay Mansour (Department of Computer Science, Tel Aviv University; Google Research); Shay Moran (Faculty of Mathematics, Faculty of Computer Science, and Faculty of Data and Decision Sciences, Technion; Google Research)
Pseudocode | Yes | "Our boosting algorithm is given in Section 3 (Algorithm 3)." The paper presents:
- Algorithm 1, Boosting via Hedge. Given: training data S ∈ (X × [k])^m, parameter η > 0.
- Algorithm 2, Initial hint. Given: S ∈ (X × Y)^m, parameters m0, p > 0.
- Algorithm 3, Recursive Boosting. Given: training data S ∈ (X × Y)^m, edge γ > 0, parameters T, η, p > 0.
- Algorithm 4, Boosting Weak-to-List Learning. Given: training data S ∈ (X × Y)^m, edge γ > 0, parameters T, η > 0.
- Algorithm 5, The one-inclusion list algorithm for H ⊆ Y^X. Input: an H-realizable sample U = ((x1, y1), ..., (xn, yn)).
- Algorithm 6, k-List PAC Learning for a class H ⊆ Y^X with d_k^DS = d < ∞. Given: training data S ∈ (X × Y)^m.
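The name "Boosting via Hedge" refers to the classic Hedge (multiplicative-weights) update applied to a distribution over training examples. As an illustrative sketch only, and not the paper's actual Algorithm 1, a generic Hedge-style boosting loop could look like the following; the `weak_learner` oracle interface and the plurality-vote aggregation are assumptions made for the example.

```python
import math
from collections import Counter

def hedge_update(dist, losses, eta):
    """One Hedge (multiplicative-weights) step: multiply each weight
    by exp(-eta * loss) and renormalize to a distribution."""
    w = [d * math.exp(-eta * l) for d, l in zip(dist, losses)]
    z = sum(w)
    return [x / z for x in w]

def boost_via_hedge(sample, weak_learner, T, eta):
    """Schematic boosting loop (not the paper's procedure): maintain a
    distribution over the m examples, query the weak learner on it each
    round, and assign loss 1 to correctly predicted examples so that
    Hedge downweights them (pushing attention toward hard examples).
    The final predictor is a plurality vote over the T weak hypotheses.
    """
    m = len(sample)
    dist = [1.0 / m] * m
    hyps = []
    for _ in range(T):
        h = weak_learner(sample, dist)  # hypothetical oracle interface
        hyps.append(h)
        losses = [1.0 if h(x) == y else 0.0 for (x, y) in sample]
        dist = hedge_update(dist, losses, eta)

    def plurality_vote(x):
        return Counter(h(x) for h in hyps).most_common(1)[0][0]
    return plurality_vote
```

With eta = ln 2 and losses (1, 0) on a uniform two-point distribution, the update shifts the weights to (1/3, 2/3), illustrating how higher-loss coordinates shrink relative to the rest.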
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology, and it provides no link to a code repository.
Open Datasets | No | The paper is purely theoretical and runs no experiments on specific datasets. The examples it gives, such as "(a, 1), (b, 2), and (c, 3)" in Section 2, are conceptual illustrations rather than datasets used for empirical evaluation.
Dataset Splits | No | The paper is purely theoretical and conducts no empirical experiments, so it does not discuss training, validation, or test dataset splits.
Hardware Specification | No | The paper is purely theoretical, focusing on algorithm design and proofs; it reports no computational experiments and therefore provides no hardware specifications.
Software Dependencies | No | The paper is purely theoretical and reports no computational implementation or experiments, so it lists no software dependencies or version numbers.
Experiment Setup | No | The paper is purely theoretical and describes no empirical experiments or their setup, so it provides no hyperparameter values, training configurations, or system-level settings.