Logarithmic Time Online Multiclass prediction

Authors: Anna E. Choromanska, John Langford

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (4 experiments) | "We address several hypotheses experimentally. ... In Table 2 and 3 we report respectively train time and per-example test time ... Table 4: Test error (%) and confidence interval on all problems."
Researcher Affiliation | Collaboration | Anna Choromanska, Courant Institute of Mathematical Sciences, New York, NY, USA (achoroma@cims.nyu.edu); John Langford, Microsoft Research, New York, NY, USA (jcl@microsoft.com)
Pseudocode | Yes | Algorithm 1: LOMtree algorithm (online tree training); a hedged sketch of the node update appears after this table.
Open Source Code | No | The paper states "All methods were implemented in the Vowpal Wabbit [25] learning system", referring to an external open-source project, but it does not state that the specific LOMtree implementation described in the paper is open-source, nor does it link to its code.
Open Datasets | Yes | "We conducted experiments on a variety of benchmark multiclass datasets: Isolet, Sector, Aloi, ImageNet (ImNet) and ODP [13]. The details of the datasets are provided in Table 1." ... [27] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
Dataset Splits | Yes | "The datasets were divided into training (90%) and testing (10%). Furthermore, 10% of the training dataset was used as a validation set." A sketch of this split appears after the table.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU/GPU models, memory, or cloud instances) used to run the experiments.
Software Dependencies | No | The paper states "All methods were implemented in the Vowpal Wabbit [25] learning system" but does not give version numbers for Vowpal Wabbit or any other software dependency.
Experiment Setup | Yes | "The regressors in the tree nodes for LOMtree, Rtree, and Filter tree as well as the OAA regressors were trained by online gradient descent, for which we explored step sizes chosen from the set {0.25, 0.5, 0.75, 1, 2, 4, 8}. We used linear regressors. For each method we investigated training with up to 20 passes through the data and we selected the best setting of the parameters (step size and number of passes) as the one minimizing the validation error. Additionally, for the LOMtree we investigated different settings of the stopping criterion for the tree expansion, T ∈ {k−1, 2k−1, 4k−1, 8k−1, 16k−1, 32k−1, 64k−1}, and of the swap resistance, RS ∈ {4, 8, 16, 32, 64, 128, 256}." A sketch of this validation sweep appears after the table.
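
To make the Pseudocode row concrete, here is a minimal sketch of the kind of online node update Algorithm 1 performs, assuming a linear regressor per node and the paper's balancing idea: an example of class y is pushed toward one side depending on how its class's running mean score compares with the node's overall running mean. The Node class, its field names, and the squared-loss update are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

class Node:
    """One internal tree node: a linear scorer plus running statistics."""
    def __init__(self, dim):
        self.w = np.zeros(dim)    # linear regressor weights
        self.mean = 0.0           # running mean score over all examples
        self.n = 0                # examples seen at this node
        self.class_mean = {}      # per-class running mean score
        self.class_n = {}         # per-class example counts
        self.left = None          # children, created as the tree grows
        self.right = None

    def update(self, x, y, step_size=0.5):
        """One online step: pick a binary target that pushes class y toward
        one side, then take a gradient step on squared loss (a sketch)."""
        s = float(self.w @ x)
        self.n += 1
        self.mean += (s - self.mean) / self.n
        cn = self.class_n.get(y, 0) + 1
        cm = self.class_mean.get(y, 0.0)
        cm += (s - cm) / cn
        self.class_n[y], self.class_mean[y] = cn, cm
        b = 1.0 if cm > self.mean else -1.0   # balancing binary target
        self.w += step_size * (b - s) * x     # gradient step on (b - s)^2 / 2
        return s

def route_and_train(root, x, y):
    """Walk from the root toward a leaf, updating every node on the path."""
    node = root
    while node is not None:
        s = node.update(x, y)
        node = node.right if s > 0 else node.left
```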
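The Dataset Splits protocol is straightforward to reproduce. Below is a minimal sketch using scikit-learn's train_test_split, which is an assumption: the paper does not say how the partitioning was implemented, nor whether it was shuffled or seeded.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for one of the benchmark datasets.
X = np.random.randn(1000, 617)            # e.g. Isolet has 617 features
y = np.random.randint(0, 26, size=1000)   # e.g. Isolet has 26 classes

# 90% train / 10% test, then 10% of the training portion held out as validation.
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.10)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval,
                                                  test_size=0.10)
```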
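Finally, the Experiment Setup row describes a grid search over step size, number of passes, T, and RS, selected by validation error. The sketch below enumerates that grid under the T ∈ {k−1, 2k−1, ...} reconstruction above; train_lomtree and error_on are hypothetical stand-ins (not functions from the paper or from Vowpal Wabbit), and the value of k is illustrative.

```python
import itertools

k = 26                                    # number of classes, e.g. Isolet
step_sizes = [0.25, 0.5, 0.75, 1, 2, 4, 8]
n_passes = range(1, 21)                   # up to 20 passes through the data
T_grid = [c * k - 1 for c in (1, 2, 4, 8, 16, 32, 64)]   # c*k - 1 non-leaf
                                          # nodes, i.e. up to c*k leaves
RS_grid = [4, 8, 16, 32, 64, 128, 256]    # swap resistance settings

best_err, best_cfg = float("inf"), None
for lr, p, T, RS in itertools.product(step_sizes, n_passes, T_grid, RS_grid):
    model = train_lomtree(X_train, y_train, step_size=lr, passes=p,
                          max_nodes=T, swap_resistance=RS)  # hypothetical
    err = error_on(model, X_val, y_val)   # validation error drives selection
    if err < best_err:
        best_err, best_cfg = err, (lr, p, T, RS)
```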