DOFEN: Deep Oblivious Forest ENsemble

Authors: Kuan-Yu Chen, Ping-Han Chiang, Hsin-Rung Chou, Chih-Sheng Chen, Tien-Hao Chang

Venue: NeurIPS 2024

Reproducibility assessment. Each entry below lists a reproducibility variable, the assessed result, and the LLM response supporting it (quoted from the paper).
Research Type: Experimental. "DOFEN surpasses other DNNs on tabular data, achieving state-of-the-art performance on the well-recognized benchmark: Tabular Benchmark [1], which includes 73 total datasets spanning a wide array of domains." ... "To evaluate DOFEN comprehensively and objectively, we have chosen a recent and well-recognized benchmark: the Tabular Benchmark [1]." (Section 4, Experiments)
Researcher Affiliation: Collaboration. Kuan-Yu Chen (Sinopac Holdings, lavamore@sinopac.com); Ping-Han Chiang (Sinopac Holdings, u10000129@gmail.com); Hsin-Rung Chou (Sinopac Holdings, sherry.chou@sinopac.com); Chih-Sheng Chen (Sinopac Holdings, sheng77@sinopac.com); Darby Tien-Hao Chang (Sinopac Holdings and National Cheng Kung University, darby@sinopac.com)
Pseudocode: Yes. Algorithm 1: Two-level Relaxed ODT Ensemble.
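For readers unfamiliar with the construction, the toy sketch below illustrates what a two-level ensemble of relaxed oblivious decision trees (rODTs) can look like. It is inferred from the algorithm's name and the paper's high-level description only, not a transcription of Algorithm 1; all function names, shapes, and hyperparameters are illustrative assumptions.

```python
# Simplified two-level rODT ensemble sketch (NOT the paper's Algorithm 1).
# Level 1: randomly group soft feature conditions into rODTs of fixed depth.
# Level 2: sample forests (subsets of rODTs) and average their outputs.
import torch

def two_level_rodt_ensemble(conditions, rodt_depth=4, n_forests=8,
                            n_trees_per_forest=16, generator=None):
    """conditions: (batch, n_conditions) soft condition activations in [0, 1]."""
    batch, n_cond = conditions.shape
    n_rodt = n_cond // rodt_depth
    # Level 1: partition a random permutation of conditions into rODT groups
    # and score each rODT by the mean of its member conditions.
    perm = torch.randperm(n_cond, generator=generator)[: n_rodt * rodt_depth]
    grouped = conditions[:, perm].reshape(batch, n_rodt, rodt_depth)
    rodt_scores = grouped.mean(dim=-1)                        # (batch, n_rodt)
    # Level 2: each forest softmax-weights a random subset of rODTs.
    forest_outputs = []
    for _ in range(n_forests):
        idx = torch.randperm(n_rodt, generator=generator)[:n_trees_per_forest]
        weights = torch.softmax(rodt_scores[:, idx], dim=-1)  # per-sample weights
        forest_outputs.append((weights * rodt_scores[:, idx]).sum(dim=-1))
    return torch.stack(forest_outputs, dim=-1).mean(dim=-1)   # (batch,)

x = torch.sigmoid(torch.randn(32, 64))   # toy soft conditions
print(two_level_rodt_ensemble(x).shape)  # torch.Size([32])
```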
Open Source Code: Yes. "The code of DOFEN is available at: https://github.com/Sinopac-Digital-Technology-Division/DOFEN."
Open Datasets: Yes. "We strictly follow the protocols of the Tabular Benchmark as detailed in its official implementation [footnote 1]." ... "For full details, please refer to the original paper [1]." The Tabular Benchmark categorizes datasets into classification and regression tasks. Footnote 1: https://github.com/LeoGrin/tabular-benchmark. (Appendix B.3: Mappings of OpenML Task ID and Dataset Name)
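Because Appendix B.3 maps OpenML task IDs to dataset names, a reproducer can pull any benchmark dataset directly from OpenML. A minimal sketch using the openml Python package; the task ID below is a placeholder, not one taken from the paper.

```python
# Fetch a Tabular Benchmark dataset by its OpenML task ID.
import openml

task = openml.tasks.get_task(361055)  # placeholder ID; real IDs are in Appendix B.3
dataset = task.get_dataset()
X, y, categorical_mask, feature_names = dataset.get_data(
    target=dataset.default_target_attribute
)
print(dataset.name, X.shape)
# The benchmark's own repository (LeoGrin/tabular-benchmark) defines the
# splits and preprocessing; OpenML only hosts the raw data.
```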
Dataset Splits: Yes. "We strictly follow the protocols of the Tabular Benchmark as detailed in its official implementation [footnote 1]. This includes dataset splits, preprocessing methods, hyperparameter search guidelines, and evaluation metrics."
Hardware Specification: Yes. "The experiments involving DNN-based models were performed using an NVIDIA GeForce RTX 2080 Ti, while those for the GBDT-based models utilized an AMD EPYC 7742 64-core processor with 16 threads." ... "This experiment was conducted using a single NVIDIA Tesla V100 GPU." ... Appendix H.1: GPUs: NVIDIA GeForce RTX 2080 Ti, NVIDIA DGX-1, NVIDIA A100. CPUs: Intel(R) Xeon(R) Silver 4210 CPU, Intel(R) Xeon(R) CPU E5-2698 v4, AMD EPYC 7742 64-core processor.
Software Dependencies: No. "DOFEN is implemented in PyTorch [31]." No specific version number is provided for PyTorch or for the other libraries used in the benchmark, such as LightGBM, CatBoost, and XGBoost.
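To close this gap, a reproducer can at least record the versions actually installed in their environment. A minimal sketch; the package list is an assumption based on the libraries named above.

```python
# Log exact versions of the packages the review mentions, so a rerun can be
# pinned to the same environment. Requires Python 3.8+ (importlib.metadata).
from importlib.metadata import version, PackageNotFoundError

for pkg in ["torch", "lightgbm", "catboost", "xgboost"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```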
Experiment Setup: Yes. "For hyperparameters used in model optimization (e.g. optimizer, learning rate, weight decay, etc.), all experiments share the same settings. Specifically, DOFEN uses the AdamW optimizer [32] with a 1e-3 learning rate and no weight decay. The batch size is set to 256, and DOFEN is trained for 500 epochs without using learning rate scheduling or early stopping." ... (Table 2: The default hyperparameters of DOFEN.)
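The quoted settings translate directly into a training-loop skeleton. A minimal sketch of those optimization settings only; the model and data below are placeholders, not DOFEN itself.

```python
# Optimization settings quoted above: AdamW, lr 1e-3, no weight decay,
# batch size 256, 500 epochs, no LR schedule, no early stopping.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))  # placeholder
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.0)
loader = DataLoader(TensorDataset(torch.randn(1024, 20), torch.randn(1024, 1)),
                    batch_size=256, shuffle=True)
loss_fn = nn.MSELoss()

for epoch in range(500):  # fixed 500 epochs; no scheduler, no early stopping
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```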