Adaptive Classification for Prediction Under a Budget

Authors: Feng Nan, Venkatesh Saligrama

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "On a number of benchmark datasets our method outperforms state-of-the-art, achieving higher accuracy for the same cost. We test various aspects of our algorithms and compare with state-of-the-art feature-budgeted algorithms on five real-world benchmark datasets: Letters, MiniBooNE Particle Identification, and Forest Covertype from the UCI repository [6], CIFAR-10 [11], and Yahoo! Learning to Rank [4]."
Researcher Affiliation | Academia | Feng Nan, Systems Engineering, Boston University, Boston, MA 02215 (fnan@bu.edu); Venkatesh Saligrama, Electrical Engineering, Boston University, Boston, MA 02215 (srv@bu.edu)
Pseudocode | Yes | Algorithm 1 (ADAPT-LIN) and Algorithm 2 (ADAPT-GBRT); a hedged sketch of the gating scheme they implement follows this table.
Open Source Code | No | The paper contains no statement about releasing open-source code and provides no link to a code repository.
Open Datasets | Yes | Letters, MiniBooNE Particle Identification, and Forest Covertype from the UCI repository [6], CIFAR-10 [11], and Yahoo! Learning to Rank [4].
Dataset Splits | Yes | Table 1 (Dataset Statistics) lists #Train, #Validation, #Test, #Features, and Feature Costs for each dataset; see the split sketch after this table.
Hardware Specification | No | The paper does not state the hardware used to run its experiments (e.g., GPU or CPU models, or cloud instance types).
Software Dependencies | No | The paper mentions software components and algorithms such as logistic regression, gradient boosted trees, CART [2], and RBF-SVM, but gives no version numbers for any software dependency.
Experiment Setup | No | The main paper defers setup details to the supplement: "Detailed experiment setups can be found in the Suppl. Material."
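
As a reading aid for the Pseudocode row: the sketch below illustrates, in plain scikit-learn, the adaptive scheme that ADAPT-LIN and ADAPT-GBRT implement, where a learned gate routes "easy" inputs to a low-cost predictor and sends the rest to a pre-trained high-accuracy model, trading accuracy against average prediction cost. This is a hedged approximation, not the authors' code: the paper trains the gate and the low-cost model jointly by alternating minimization, whereas here they are fitted independently, and every model choice below is a stand-in.

```python
"""Minimal sketch of the adaptive gating idea behind ADAPT-LIN / ADAPT-GBRT.

NOT the authors' implementation: the paper learns the gate and the low-cost
model jointly via alternating minimization; here each piece is fitted
independently for brevity, and the datasets/models are stand-ins.
"""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# High-cost, high-accuracy model (stands in for the paper's pre-trained model).
hpc = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
# Low-cost linear model (stands in for the paper's low-prediction-cost model).
lpc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Gating target: route to the cheap model only where it already agrees with
# the expensive one. (The paper learns this jointly; we approximate it.)
agree = (lpc.predict(X_tr) == hpc.predict(X_tr)).astype(int)
gate = LogisticRegression(max_iter=1000).fit(X_tr, agree)

# Sweep the gating threshold to trace an accuracy/cost trade-off curve.
# For clarity both models score every test point; a real deployment would
# query the expensive model only on the examples the gate routes to it.
p_easy = gate.predict_proba(X_te)[:, 1]
for thresh in (0.3, 0.5, 0.7, 0.9):
    to_lpc = p_easy >= thresh
    y_hat = np.where(to_lpc, lpc.predict(X_te), hpc.predict(X_te))
    acc = (y_hat == y_te).mean()
    print(f"thresh={thresh:.1f}  acc={acc:.3f}  routed cheap={to_lpc.mean():.2f}")
```

Raising the threshold sends more examples to the expensive model, which is the knob that traces the accuracy-versus-cost curves the paper reports.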
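
For the Dataset Splits row: Table 1 of the paper gives fixed #Train/#Validation/#Test counts per dataset, which are not reproduced here. The snippet below is only a generic illustration of carving a three-way split for one of the cited datasets (Forest Covertype, via scikit-learn's fetcher); the split fractions are placeholders, not the paper's numbers.

```python
"""Hedged sketch of a three-way train/validation/test split.

The concrete sizes below are placeholders; the paper's actual per-dataset
counts are in its Table 1.
"""
from sklearn.datasets import fetch_covtype  # Forest Covertype (UCI)
from sklearn.model_selection import train_test_split

X, y = fetch_covtype(return_X_y=True)
# Split off the test set first, then a validation set from the remainder.
X_trval, X_test, y_trval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trval, y_trval, test_size=0.25, random_state=0)
print(len(X_train), len(X_val), len(X_test))
```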