Logarithmic Time One-Against-Some

Authors: Hal Daumé III, Nikos Karampatziakis, John Langford, Paul Mineiro

ICML 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4. Empirical Results: We study several questions empirically. ... Throughout this section we conduct experiments using learning with a linear representation."
Researcher Affiliation | Collaboration | "University of Maryland; Microsoft. Correspondence to: Paul Mineiro <pmineiro@microsoft.com>."
Pseudocode | Yes | "Algorithm 1 Predict. ... Algorithm 2 Train. ... Algorithm 3 update router. ... Algorithm 4 update regressors" (an illustrative prediction sketch follows the table)
Open Source Code | Yes | "Implementations of the learning algorithms, and scripts to reproduce the data sets and experimental results, are available on github (Mineiro, 2017). ... Mineiro, Paul. Recall tree demo, 2017. URL https://github.com/JohnLangford/vowpal_wabbit/tree/master/demo/recall_tree."
Open Datasets | Yes | "Table 1. Datasets used for experimentation." The table has columns Dataset, Source, Task, Classes, and Examples; the datasets are ALOI (Geusebroek et al., 2005), Imagenet (Oquab et al., 2014), LTCB (Mahoney, 2009), and ODP (Bennett & Nguyen, 2009).
Dataset Splits | No | The paper mentions 'progressive validation loss' but does not provide explicit training/validation/test splits for all experiments or a general splitting methodology for reproducibility.
Hardware Specification | No | The paper mentions 'GPUs' and '24 cores in parallel' but does not specify exact GPU or CPU models or any other detailed hardware specifications for the experiments.
Software Dependencies | No | The paper mentions Vowpal Wabbit in the GitHub link but does not provide version numbers for software dependencies or other libraries used in the experiments.
Experiment Setup | Yes | "Here, λ is a hyperparameter of the recall tree (in fact, it is the only additional hyperparameter), which controls how aggressively the tree branches." ... "When F = O(log K) this does not compromise the goal of achieving logarithmic time classification." ... "To test this we trained on the LTCB dataset with a multiplier on the bound of either 0 (i.e. just using empirical recall directly) or 1." (an illustrative sketch of such a bound follows the table)
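
The pseudocode row above names Algorithms 1-4 (Predict, Train, update router, update regressors). The following is a minimal sketch of the one-against-some prediction idea, assuming linear per-node routers and per-node candidate sets; `Node`, `predict`, and all field names are hypothetical illustrations, not the paper's or Vowpal Wabbit's actual implementation.

```python
import numpy as np

class Node:
    """Hypothetical tree node: a linear router plus a small candidate
    label set with one-against-some scorers (illustration only)."""
    def __init__(self, dim, candidates):
        self.router = np.zeros(dim)          # decides left vs. right
        self.candidates = list(candidates)   # labels scored at this node
        self.scorers = {c: np.zeros(dim) for c in candidates}
        self.left = None
        self.right = None

def predict(root, x):
    """Sketch of a Predict-style routine: descend via routers, then
    score only the candidate set at the reached node."""
    node = root
    while node.left is not None and node.right is not None:
        node = node.right if node.router @ x > 0 else node.left
    # One-against-some: argmax over the node's candidates only,
    # not over all K classes.
    return max(node.candidates, key=lambda c: node.scorers[c] @ x)
```

Under this reading, descending the tree costs one router evaluation per level and the final scoring costs one dot product per candidate, so prediction stays logarithmic in K when both the depth and the candidate-set size F are O(log K), as the quoted passage requires.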
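The experiment-setup row also quotes a "multiplier on the bound" of either 0 (empirical recall used directly) or 1. Below is a hedged illustration of what such a lower bound could look like, assuming a generic Hoeffding-style deviation term; the function name, the `delta` parameter, and the exact form of the deviation are assumptions, since the paper's precise bound is not reproduced in this extract.

```python
import math

def recall_lower_bound(correct, total, multiplier=1.0, delta=0.05):
    """Illustrative lower bound on recall: empirical recall minus a
    scaled confidence-style deviation term (form assumed, not the
    paper's exact bound)."""
    if total == 0:
        return 0.0
    empirical = correct / total
    # Hoeffding-style deviation term; constants are an assumption.
    deviation = math.sqrt(math.log(1.0 / delta) / (2.0 * total))
    return max(0.0, empirical - multiplier * deviation)
```

With `multiplier=0` the function returns the empirical recall itself, matching the "just using empirical recall directly" configuration quoted from the LTCB experiment.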