Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
ThunderGBM: Fast GBDTs and Random Forests on GPUs
Authors: Zeyi Wen, Hanfeng Liu, Jiashuai Shi, Qinbin Li, Bingsheng He, Jian Chen
JMLR 2020 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show that ThunderGBM outperforms the existing libraries while producing similar models, and can handle high-dimensional problems where existing GPU-based libraries fail. Documentation, examples, and more details about ThunderGBM are available at https://github.com/xtra-computing/thundergbm. Keywords: Gradient Boosting Decision Trees, Random Forests, GPUs, Efficiency |
| Researcher Affiliation | Academia | Dept. of Computer Science and Software Engineering, Uni. of Western Australia, 6009, Australia; School of Computing, National University of Singapore, 117418, Singapore; School of Software Engineering, South China University of Technology, Guangzhou, 510006, China |
| Pseudocode | No | The paper includes an overview diagram (Figure 1) and describes the algorithms verbally, but no structured pseudocode or algorithm blocks are explicitly provided. |
| Open Source Code | Yes | This article presents an efficient and open source software toolkit called ThunderGBM which exploits the high-performance Graphics Processing Units (GPUs) for GBDTs and RFs. ... Documentation, examples, and more details about ThunderGBM are available at https://github.com/xtra-computing/thundergbm. |
| Open Datasets | Yes | Five data sets are used here, and regression, classification and ranking are marked with reg, clf and rnk, respectively. ... cifar10 (clf) |
| Dataset Splits | No | The paper mentions that 'Five data sets are used here', but it does not specify any training/test/validation splits for these datasets. It refers to a supplementary file for more details ('More results on experiments and descriptions about the data sets can be found in the supplementary file (Wen et al., 2019a).'). |
| Hardware Specification | Yes | We conducted experiments on a Linux workstation with two Xeon E5-2640 v4 10 core CPUs, 256GB memory and a Tesla P100 GPU of 12GB memory. |
| Software Dependencies | No | We used the versions of XGBoost, LightGBM and CatBoost on 21 Jul 2019. The paper gives only the date of the baseline library versions used for comparison; it does not provide specific version numbers for ThunderGBM's own dependencies or other software components. |
| Experiment Setup | Yes | The tree depth is set to 6 and the number of trees is 40. |