Online Agnostic Multiclass Boosting

Authors: Vinod Raman, Ambuj Tewari

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment. Each entry below lists the variable, the assessed result, and the LLM response that supports it.

Research Type: Experimental
"Finally, we give empirical results showcasing that our OCO-based boosting algorithms are fast and competitive with existing state-of-the-art multiclass boosting algorithms. We performed experiments with Algorithm 1 on seven UCI datasets [13]."

Researcher Affiliation: Academia
"Vinod Raman, Department of Statistics, University of Michigan, Ann Arbor, MI 48104, vkraman@umich.edu; Ambuj Tewari, Department of Statistics, University of Michigan, Ann Arbor, MI 48018, tewaria@umich.edu"

Pseudocode: Yes
"Pseudocode for our online agnostic boosting algorithm is provided in Algorithm 1."

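For readers who want a feel for the control flow, here is a minimal Python sketch of a generic online boosting round. It is not the paper's Algorithm 1: the actual algorithm drives each weak learner through an OCO subroutine with a randomized relabeling step, both of which this skeleton omits. The weak learners are assumed to expose River-style `predict_one`/`learn_one`, and `classes` and `rng` are illustrative names.

```python
from collections import Counter

# Minimal sketch of a generic online boosting round; NOT the paper's
# Algorithm 1 (no OCO subroutine, no randomized relabeling).
# `rng` is e.g. a random.Random instance, `classes` the finite label set.

def boosted_predict(x, weak_learners, weights, classes, rng):
    """Weighted plurality vote over the weak learners' predictions."""
    tally = Counter()
    for w, learner in zip(weights, weak_learners):
        vote = learner.predict_one(x)
        if vote is not None:          # an untrained learner may abstain
            tally[vote] += w
    if not tally:                     # no learner has predicted yet
        return rng.choice(classes)
    return tally.most_common(1)[0][0]

def boosted_update(x, y, weak_learners):
    """Reveal the true label to every weak learner."""
    for learner in weak_learners:
        learner.learn_one(x, y)
```
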
Open Source Code: Yes
"All code is available at https://github.com/vinodkraman/Online_Agnostic_Multiclass_Boosting."

Open Datasets: Yes
"We performed experiments with Algorithm 1 on seven UCI datasets [13]." The cited reference is: [13] C. L. DuBois. UCI network data repository, 2008. URL http://networkdata.ics.uci.edu.

Dataset Splits: No
The paper mentions using "five independent shuffles of each dataset" and notes that "γ was tuned separately for each respective cell of Table 1", which implies some form of data splitting and validation. However, the main text does not explicitly specify train/validation/test split percentages or sample counts.

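Since the exact protocol is not spelled out, the following is only a hypothetical reconstruction of how a "five independent shuffles" evaluation is commonly run for online learners: progressive (predict-then-train) evaluation over five reshufflings, averaging the online accuracy. The names `examples` (a list of (x, y) pairs) and `make_model` (a factory for a fresh online learner) are illustrative.

```python
import random

# Hypothetical reconstruction of the "five independent shuffles" protocol;
# the paper does not spell out its exact evaluation split.

def average_online_accuracy(examples, make_model, n_shuffles=5, seed=0):
    accuracies = []
    for i in range(n_shuffles):
        rng = random.Random(seed + i)       # independent shuffle per run
        stream = list(examples)
        rng.shuffle(stream)
        model, correct = make_model(), 0
        for x, y in stream:
            if model.predict_one(x) == y:   # predict before the label is revealed
                correct += 1
            model.learn_one(x, y)
        accuracies.append(correct / len(stream))
    return sum(accuracies) / n_shuffles
```
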
Hardware Specification: No
The paper states that information on "the total amount of compute and the type of resources used" is in Appendix F, which is not provided. The main text does not specify any particular GPU model, CPU model, or other hardware details.

Software Dependencies: No
The paper states "For weak learners, we used the implementation of the Very Fast Decision Tree from the River package [29]", but does not provide version numbers for River or for any other software dependency such as Python.

Experiment Setup: Yes
"For weak learners, we used the implementation of the Very Fast Decision Tree from the River package [29] and restricted the maximum depth of the tree to 1. We used Projected OGD [35] for the OCO algorithm and set the number of weak learners, N, to 100 for each boosting algorithm. Thus, γ was tuned separately for each respective cell of Table 1."

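To make the quoted setup concrete, the snippet below instantiates 100 depth-1 Hoeffding trees from River (River's `HoeffdingTreeClassifier` is its VFDT implementation) alongside a projected online gradient descent step. The projection onto the probability simplex is an illustrative assumption: the feasible set and losses that the paper's Algorithm 1 feeds to Projected OGD come from its specific reduction and are not reproduced here.

```python
import numpy as np
from river import tree

# 100 weak learners, each a depth-1 Very Fast Decision Tree, matching the
# quoted setup (HoeffdingTreeClassifier is River's VFDT implementation).
N = 100
weak_learners = [tree.HoeffdingTreeClassifier(max_depth=1) for _ in range(N)]

def project_simplex(v):
    """Euclidean projection onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def ogd_step(w, grad, eta):
    """One Projected OGD update. The simplex domain is an illustrative
    choice; the paper's Algorithm 1 defines its own feasible set and
    losses, which are not reproduced here."""
    return project_simplex(w - eta * grad)
```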