Feature-Budgeted Random Forest

Authors: Feng Nan, Joseph Wang, Venkatesh Saligrama

ICML 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Empirically, on a number of benchmark datasets we demonstrate competitive accuracy-cost curves against state-of-the-art prediction-time algorithms.' and Section 3 (Experiments)
Researcher Affiliation | Academia | Feng Nan (FNAN@BU.EDU), Joseph Wang (JOEWANG@BU.EDU), Venkatesh Saligrama (SRV@BU.EDU), Boston University, 8 Saint Mary's Street, Boston, MA
Pseudocode | Yes | 'Algorithm 1 BUDGETRF 1: procedure BUDGETRF(F, B, C, ytr, Xtr, ytv, Xtv) ... Subroutine GREEDYTREE 8: procedure GREEDYTREE(F, C, y, X)' (a sketch of the greedy tree growth appears after the table)
Open Source Code | No | The paper states, 'We use the code provided by the authors for Greedy Miser,' which refers to third-party code. There is no explicit statement or link indicating that the authors' own code for BUDGETRF is available.
Open Datasets | Yes | Yahoo! Learning to Rank (Chapelle et al.); MiniBooNE Particle Identification Data Set (Frank & Asuncion); Forest Covertype Data Set (Frank & Asuncion); CIFAR-10 (Krizhevsky, 2009)
Dataset Splits | Yes | 'There are 141,397/146,769/184,968 examples in training/validation/test sets.'
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models) used to run its experiments.
Software Dependencies | No | The paper mentions Matlab's TreeBagger and the scikit-learn package but does not give version numbers for these or any other software dependencies, so the software environment is not reproducible.
Experiment Setup | Yes | 'We run BUDGETRF using the threshold α = 0 for the threshold-Pairs impurity function. ... For each α we build a forest of maximum 40 trees using BUDGETRF. ... The optimization of classifiers in line 12 of Algorithm 1 is approximated by randomly generating 80, 40 and 20 stumps if the number of examples exceeds 2000, 500 and less than 500, respectively and select the best among them.' (a sketch of this stump sampling appears after the table)
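
The Pseudocode row quotes Algorithm 1 (BUDGETRF) and its GREEDYTREE subroutine. The sketch below illustrates the greedy, cost-aware tree growth that GREEDYTREE describes, assuming integer class labels, positive per-feature acquisition costs, and a simplified pair-counting impurity in place of the paper's exact threshold-Pairs function. The names pairs_impurity, Node, and grow_greedy_tree are illustrative, not the authors' implementation.

    # Minimal sketch of a GREEDYTREE-style builder: at each node, pick the
    # (feature, threshold) stump with the largest impurity reduction per unit
    # feature-acquisition cost, then recurse. Hypothetical names throughout.
    from dataclasses import dataclass
    from typing import Optional
    import numpy as np

    def pairs_impurity(y):
        # Pair-counting impurity (a simplified stand-in for threshold-Pairs
        # with alpha = 0): number of example pairs with unequal labels.
        counts = np.bincount(y).astype(np.int64)
        n = int(y.size)
        return (n * n - int(np.sum(counts ** 2))) / 2.0

    @dataclass
    class Node:
        feature: int = -1
        threshold: float = 0.0
        left: Optional["Node"] = None
        right: Optional["Node"] = None
        label: int = -1  # majority label; only meaningful at leaves

    def grow_greedy_tree(X, y, costs, min_samples=5):
        # Stop when the node is small or pure; emit a majority-vote leaf.
        if y.size < min_samples or np.unique(y).size == 1:
            return Node(label=int(np.bincount(y).argmax()))
        parent = pairs_impurity(y)
        best, best_score = None, 0.0
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f])[:-1]:  # exclude max so split is non-trivial
                mask = X[:, f] <= t
                reduction = parent - pairs_impurity(y[mask]) - pairs_impurity(y[~mask])
                score = reduction / costs[f]  # impurity reduction per unit cost
                if score > best_score:
                    best, best_score = (f, float(t), mask), score
        if best is None:  # no stump reduces impurity
            return Node(label=int(np.bincount(y).argmax()))
        f, t, mask = best
        return Node(feature=f, threshold=t,
                    left=grow_greedy_tree(X[mask], y[mask], costs, min_samples),
                    right=grow_greedy_tree(X[~mask], y[~mask], costs, min_samples))

BUDGETRF itself, per the quoted setup, would bag up to 40 such trees and select against the feature budget B using the validation set (ytv, Xtv).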
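The Experiment Setup row quotes a randomized approximation for the stump search in line 12 of Algorithm 1. The sketch below shows one way to implement the quoted 80/40/20 sampling rule; num_candidate_stumps, sample_best_stump, and the score_fn hook are assumed names, not the paper's code.

    # Hedged sketch of the size-dependent randomized stump search: rather than
    # scanning every (feature, threshold) pair, sample a node-size-dependent
    # number of random stumps and keep the best-scoring one.
    import numpy as np

    def num_candidate_stumps(n_examples):
        # 80 stumps for nodes with more than 2000 examples, 40 for more than
        # 500, and 20 otherwise, per the quoted setup.
        if n_examples > 2000:
            return 80
        if n_examples > 500:
            return 40
        return 20

    def sample_best_stump(X, y, costs, score_fn, rng=None):
        # score_fn(y, mask) should return the impurity reduction of the split;
        # dividing by the feature's cost matches the greedy criterion above.
        if rng is None:
            rng = np.random.default_rng()
        n, d = X.shape
        best, best_score = None, -np.inf
        for _ in range(num_candidate_stumps(n)):
            f = int(rng.integers(d))
            t = float(X[rng.integers(n), f])  # threshold from a random example
            mask = X[:, f] <= t
            if mask.all() or not mask.any():  # skip degenerate splits
                continue
            score = score_fn(y, mask) / costs[f]
            if score > best_score:
                best, best_score = (f, t), score
        return best  # may be None if every sampled stump was degenerate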