Average Sensitivity of Decision Tree Learning

Authors: Satoshi Hara, Yuichi Yoshida

ICLR 2023

Reproducibility assessment: each variable below gives the result and the supporting LLM response.
Research Type: Experimental. Response: "We demonstrate that the proposed algorithm can output stable decision trees." (Section 7, Experiments); "Datasets: We used datasets shown in Table 1."

Researcher Affiliation: Academia. Response: "Satoshi Hara, Osaka University, satohara@ar.sanken.osaka-u.ac.jp; Yuichi Yoshida, National Institute of Informatics, yyoshida@nii.ac.jp"
Pseudocode: Yes. Response: "Algorithm 1: Procedure PREDICT"; "Algorithm 2: Procedure DISTANCE"; Algorithm 3; Algorithm 4.
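For orientation, the sketch below shows a generic top-down PREDICT traversal of a decision tree. It is a standard routine under an assumed node layout (feature, threshold, children, label), not a transcription of the paper's Algorithm 1:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # Assumed layout: internal nodes split on (feature, threshold);
    # leaves carry a class label. This mirrors generic decision trees,
    # not necessarily the paper's exact data structure.
    feature: int = -1
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: int = 0

def predict(node: Node, x) -> int:
    """Standard top-down traversal: follow the split at each internal
    node until a leaf is reached, then return its label."""
    while node.left is not None and node.right is not None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.label

# Usage: a depth-1 stump that thresholds feature 0 at 0.5.
tree = Node(feature=0, threshold=0.5, left=Node(label=0), right=Node(label=1))
print(predict(tree, [0.7]))  # 1
```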
Open Source Code: Yes. Response: "The code is available at https://github.com/sato9hara/StableDecisionTree"

Open Datasets: Yes. Response: "These datasets are obtained from https://github.com/chenhongge/RobustTrees"
Dataset Splits: No. Response: "For training, we randomly sampled 80% of the data points and 1000 data points for small and large datasets, respectively. In the experiments, we evaluated the test accuracy of the learned decision trees using the entire test data." Cross-validation is mentioned for tree-depth selection, but no explicit validation split is specified.
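As a hedged illustration of the sampling described in that quote, a minimal sketch; the function name and the small/large cutoff are assumptions, since the paper does not state the exact threshold:

```python
import numpy as np

def sample_training_set(X, y, rng, large_cutoff=1250):
    """Draw the training subset as described in the paper: 80% of the
    data points for small datasets, a fixed 1000 points for large ones.
    The cutoff separating 'small' from 'large' is an assumption; the
    paper does not state it (80% of 1250 is exactly 1000)."""
    n = X.shape[0]
    n_sample = 1000 if n > large_cutoff else int(0.8 * n)
    idx = rng.choice(n, size=n_sample, replace=False)
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
X_tr, y_tr = sample_training_set(X, y, rng)
print(X_tr.shape)  # (160, 5); test accuracy is then measured on the full test split
```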
Hardware Specification: No. Response: No specific hardware details (such as CPU/GPU models or memory) are mentioned in the paper.

Software Dependencies: No. Response: "We implemented both the greedy and proposed algorithms in Python 3 using the JIT compiler of Numba." "Python 3" is not specific enough (no minor version is given), and no version is given for Numba.
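For context, Numba's JIT compilation is typically applied by decorating numeric hot loops with @njit. A minimal sketch of that pattern (the impurity function is illustrative, not taken from the authors' repository):

```python
import numpy as np
from numba import njit

@njit(cache=True)
def gini_impurity(counts):
    """Gini impurity of a class-count vector: the kind of inner-loop
    splitting criterion one would JIT-compile when growing trees."""
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts / total
    return 1.0 - np.sum(p * p)

print(gini_impurity(np.array([5.0, 5.0])))  # 0.5 for a balanced leaf
```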
Experiment Setup: Yes. Response: "We set the tree depth shown in Table 1 so that the greedy algorithm exhibits the highest accuracy in cross-validation." and "we set Q = 500."
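A minimal sketch of the depth-selection protocol this quote describes, with scikit-learn's CART-style learner standing in for the paper's greedy algorithm; scikit-learn, the depth grid, and the 5-fold setting are assumptions, not details stated in the paper:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def select_depth(X, y, depths=range(1, 11), cv=5):
    """Return the depth whose greedy (CART-style) tree attains the
    highest cross-validated accuracy, mirroring how the depths in
    Table 1 are said to have been chosen."""
    scores = [
        cross_val_score(DecisionTreeClassifier(max_depth=d, random_state=0),
                        X, y, cv=cv).mean()
        for d in depths
    ]
    return list(depths)[int(np.argmax(scores))]
```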