Weighted Oblique Decision Trees

Authors: Bin-Bin Yang, Song-Qing Shen, Wei Gao (pp. 5621-5627)

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show the effectiveness of the proposed algorithm. This section empirically evaluates our WODT method on extensive datasets. We begin with the experimental settings, and then make empirical comparisons of our WODT method with state-of-the-art algorithms of decision trees. We further investigate tree sizes based on the cardinality of leaf node, and show the comparisons of running time. We finally analyze the training and generalization performance with respect to different tree depths.
Researcher Affiliation | Academia | Bin-Bin Yang, Song-Qing Shen, Wei Gao; National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China; {yangbb, shensq, gaow}@lamda.nju.edu.cn
Pseudocode | Yes | Algorithm 1 Induce Subtree(S, D, d) of Weighted Oblique Decision Tree (WODT) (see the induction sketch after the table)
Open Source Code | No | The paper provides a link for other methods (OC1 and CART-LC) but not for the proposed WODT method.
Open Datasets | Yes | We conduct our experiments on twenty benchmark datasets, as summarized in Table 1. Most datasets have been well-studied in previous studies on decision trees. The features have been scaled to [-1, 1] for all datasets. Datasets are taken from the UCI repository: http://www.ics.uci.edu/~mlearn/MLRepository.html (see the scaling sketch after the table)
Dataset Splits | Yes | The performance of all methods are evaluated by 10 trials of 5-fold cross validation with different random seeds, and the performance is obtained by averaging over 50 runs. (see the evaluation sketch after the table)
Hardware Specification | Yes | All experiments are performed on a node of a computational cluster with 16 CPUs (Intel Xeon Core 3.0GHz) running Red Hat Enterprise Linux 5 with 48GB main memory.
Software Dependencies | No | The paper mentions
Experiment Setup | Yes | For the CO2 method, we execute 2 trials of 5-fold cross validation to select the regularization parameter ν ∈ {0.1, 1, 4, 10, 43, 100} and the learning rate η ∈ {0.003, 0.01, 0.03}, as in the work of Norouzi et al. (2015). We take the default parameters as in the work of Murthy, Kasif, and Salzberg (1994) for OC1, OC1r, CART-LC and CART-LCr. We also select information gain as the criterion for APDT, OC1, OC1r, CART-LC and CART-LCr. Our WODT method does not require additional hyper-parameters. (see the hyper-parameter selection sketch after the table)
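
The Pseudocode row cites Algorithm 1, which recursively induces an oblique subtree. Below is a minimal Python sketch in the spirit of that procedure, assuming a logistic soft assignment of instances to the two children and a weighted-entropy split objective optimized with SciPy's L-BFGS-B; the function names, restart count, and stopping thresholds are illustrative choices, not the authors' reference implementation. The soft assignment keeps the split objective differentiable in the hyperplane parameters, which is what makes gradient-based optimization possible.

```python
# Hedged sketch of recursive oblique subtree induction in the spirit of
# Algorithm 1 Induce Subtree(S, D, d); names and details are assumptions.
import numpy as np
from scipy.optimize import minimize

def weighted_entropy(weights, y, n_classes):
    """Entropy of a child node whose instances arrive with soft weights."""
    totals = np.array([weights[y == c].sum() for c in range(n_classes)])
    p = totals / max(totals.sum(), 1e-12)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def split_objective(theta, X1, y, n_classes):
    """Weighted sum of child entropies induced by the oblique split X1 @ theta."""
    g = 1.0 / (1.0 + np.exp(-X1 @ theta))      # soft assignment to the right child
    wl, wr = 1.0 - g, g
    nl, nr = wl.sum(), wr.sum()
    n = nl + nr
    return (nl / n) * weighted_entropy(wl, y, n_classes) + \
           (nr / n) * weighted_entropy(wr, y, n_classes)

def induce_subtree(X, y, depth, max_depth=8, min_samples=4, n_classes=None):
    """Recursively grow an oblique tree; y holds integer labels 0..K-1."""
    n_classes = n_classes or int(y.max()) + 1
    counts = np.bincount(y, minlength=n_classes)
    if depth >= max_depth or len(y) < min_samples or counts.max() == len(y):
        return {"leaf": True, "label": int(counts.argmax())}
    X1 = np.hstack([X, np.ones((len(X), 1))])  # augment features with a bias term
    best = min(
        (minimize(split_objective, np.random.randn(X1.shape[1]) * 0.1,
                  args=(X1, y, n_classes), method="L-BFGS-B")
         for _ in range(3)),                    # a few random restarts
        key=lambda r: r.fun)
    theta = best.x
    go_right = (X1 @ theta) > 0                 # hard routing once the split is learned
    if go_right.all() or (~go_right).all():
        return {"leaf": True, "label": int(counts.argmax())}
    return {"leaf": False, "theta": theta,
            "left": induce_subtree(X[~go_right], y[~go_right], depth + 1,
                                   max_depth, min_samples, n_classes),
            "right": induce_subtree(X[go_right], y[go_right], depth + 1,
                                    max_depth, min_samples, n_classes)}

def predict_one(node, x):
    """Route a single instance down the learned oblique tree."""
    while not node["leaf"]:
        x1 = np.append(x, 1.0)
        node = node["right"] if x1 @ node["theta"] > 0 else node["left"]
    return node["label"]
```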
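The Open Datasets row reports that features were scaled to [-1, 1]. The paper states the target range but not the exact procedure, so the sketch below assumes per-feature min-max scaling via scikit-learn's MinMaxScaler.

```python
# Hedged sketch: per-feature min-max scaling to [-1, 1]; MinMaxScaler is an
# assumption, as the paper only states the target range.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(100, 8) * 10.0               # stand-in for a UCI feature matrix
scaler = MinMaxScaler(feature_range=(-1, 1))
X_scaled = scaler.fit_transform(X)               # each column now spans [-1, 1]
```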
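The Dataset Splits row describes 10 trials of 5-fold cross validation with different random seeds, averaged over the resulting 50 runs. A minimal sketch of that protocol, using a placeholder scikit-learn classifier rather than the authors' WODT implementation:

```python
# Hedged sketch of the evaluation protocol: 10 trials x 5 folds = 50 runs,
# each trial shuffled with a different random seed; classifier is a placeholder.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
scores = []
for seed in range(10):                                    # 10 trials
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for train_idx, test_idx in cv.split(X, y):            # 5 folds per trial
        clf = DecisionTreeClassifier(random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
print(f"mean accuracy over {len(scores)} runs: {np.mean(scores):.4f}")
```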
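The Experiment Setup row describes selecting CO2's regularization parameter ν and learning rate η by 2 trials of 5-fold cross validation over a fixed grid. The sketch below shows that selection loop; make_co2_classifier is a hypothetical factory standing in for the CO2 method, which is not assumed to have a public implementation, and any estimator exposing fit/score would slot in.

```python
# Hedged sketch of the CO2 hyper-parameter selection: 2 trials of 5-fold CV
# over nu in {0.1, 1, 4, 10, 43, 100} and eta in {0.003, 0.01, 0.03}.
# `make_co2_classifier(nu=..., eta=...)` is a hypothetical factory, not a real API.
import itertools
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

def select_hyperparameters(make_co2_classifier, X, y):
    grid = itertools.product([0.1, 1, 4, 10, 43, 100], [0.003, 0.01, 0.03])
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=0)
    best_score, best_params = -np.inf, None
    for nu, eta in grid:
        fold_scores = []
        for train_idx, test_idx in cv.split(X, y):         # 2 x 5 = 10 fits per setting
            clf = make_co2_classifier(nu=nu, eta=eta)
            clf.fit(X[train_idx], y[train_idx])
            fold_scores.append(clf.score(X[test_idx], y[test_idx]))
        if np.mean(fold_scores) > best_score:
            best_score, best_params = np.mean(fold_scores), (nu, eta)
    return best_params, best_score
```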