Feature-Cost Sensitive Learning with Submodular Trees of Classifiers
Authors: Matt Kusner, Wenlin Chen, Quan Zhou, Zhixiang (Eddie) Xu, Kilian Weinberger, Yixin Chen
AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate our approach on a real-world feature-cost sensitive ranking dataset: the Yahoo! Learning to Rank Challenge dataset. We begin by describing the dataset and show Precision@5 per cost compared against CSTC (Xu et al. 2014) and another cost-sensitive baseline. We then present results on a diverse set of non-cost sensitive datasets, demonstrating the flexibility of our approach. For all datasets we evaluate the training times of our approach compared to CSTC for varying tree budgets. |
| Researcher Affiliation | Academia | Washington University in St. Louis, 1 Brookings Drive, MO 63130; Tsinghua University, Beijing 100084, China; {mkusner, wenlinchen, zhixiang.xu, kilian, ychen25}@wustl.edu; zhouq10@mails.tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1 ASTC in pseudo-code. |
| Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology. |
| Open Datasets | Yes | We evaluate our approach on a real-world feature-cost sensitive ranking dataset: the Yahoo! Learning to Rank Challenge dataset (Chen et al. 2012). |
| Dataset Splits | No | The paper mentions 'hyperparameter tuning on a validation set' but does not provide specific details on the dataset splits (e.g., percentages or counts) for training, validation, or test sets. |
| Hardware Specification | No | The paper mentions 'Computations were performed via the Washington University Center for High Performance Computing', but does not provide specific hardware details such as CPU/GPU models, memory, or other specifications. |
| Software Dependencies | No | The paper does not provide specific software dependencies or version numbers for its implementation. |
| Experiment Setup | Yes | For both algorithms we set a maximum tree depth of 5. We set a new-feature budget B identical for each node in the tree and then greedily select new features up to cost B for each node. Finally, we set node thresholds θk to send half of the training inputs to each child node. (A hedged sketch of this configuration follows the table.) |
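
To make the Experiment Setup row concrete, below is a minimal sketch, not the authors' implementation, of how a depth-limited, cost-sensitive tree of classifiers could be grown under those settings: a shared per-node new-feature budget B, greedy selection of features up to cost B at each node, and a node threshold θk chosen so that roughly half of the training inputs are routed to each child. The gain heuristic, the `LogisticRegression` base learner, and all names (`NODE_BUDGET_B`, `build_node`, etc.) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

MAX_DEPTH = 5          # "maximum tree depth of 5" from the setup row
NODE_BUDGET_B = 10.0   # per-node new-feature budget B (placeholder value)

def greedy_feature_selection(gains, costs, budget):
    """Greedily pick features by gain-to-cost ratio until the budget is spent.

    `gains` and `costs` are 1-D arrays over candidate features; this stands in
    for the paper's per-node feature-selection step (assumption, not the
    paper's exact objective).
    """
    order = np.argsort(-gains / costs)
    chosen, spent = [], 0.0
    for j in order:
        if spent + costs[j] <= budget:
            chosen.append(int(j))
            spent += costs[j]
    return chosen or [int(np.argmin(costs))]  # fall back to the cheapest feature

def build_node(X, y, costs, depth=0):
    """Recursively grow a cost-sensitive tree of classifiers.

    Each node selects features under budget B, trains a simple classifier,
    and routes half of its training inputs to each child via a median
    threshold theta_k on the node's decision score.
    """
    if y.size == 0:
        return {"leaf": True, "prediction": 0}
    if depth >= MAX_DEPTH or len(np.unique(y)) < 2:
        return {"leaf": True, "prediction": int(np.bincount(y).argmax())}

    # Placeholder per-feature gains: |correlation with the label| (assumption).
    gains = np.nan_to_num(np.abs(np.corrcoef(X.T, y)[-1, :-1])) + 1e-12
    feats = greedy_feature_selection(gains, costs, NODE_BUDGET_B)

    clf = LogisticRegression().fit(X[:, feats], y)
    scores = clf.decision_function(X[:, feats])
    theta_k = np.median(scores)              # sends ~half of the inputs each way

    left, right = scores <= theta_k, scores > theta_k
    return {
        "leaf": False, "features": feats, "clf": clf, "theta": theta_k,
        "left": build_node(X[left], y[left], costs, depth + 1),
        "right": build_node(X[right], y[right], costs, depth + 1),
    }
```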
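
The evaluation excerpt in the Research Type row reports Precision@5 at varying feature costs on the Yahoo! Learning to Rank Challenge data. The helper below is a small sketch of Precision@k for one query, assuming binary relevance labels (the Yahoo! set uses graded labels, which would first need thresholding); the function name and data layout are assumptions for illustration only.

```python
import numpy as np

def precision_at_k(relevance, scores, k=5):
    """Precision@k for one query: fraction of the top-k documents
    (ranked by `scores`, descending) whose binary `relevance` label is 1."""
    top_k = np.argsort(-np.asarray(scores, dtype=float))[:k]
    return float(np.mean(np.asarray(relevance, dtype=float)[top_k]))

# Example: average Precision@5 over (relevance, scores) pairs for each query,
# giving one point on a "Precision@5 versus feature cost" curve.
queries = [([1, 0, 1, 0, 0, 1, 0], [0.9, 0.1, 0.8, 0.4, 0.2, 0.7, 0.3])]
mean_p_at_5 = np.mean([precision_at_k(r, s, k=5) for r, s in queries])
print(f"mean Precision@5 = {mean_p_at_5:.3f}")
```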