Fast Provably Robust Decision Trees and Boosting

Authors: Jun-Qi Guo, Ming-Zhuo Teng, Wei Gao, Zhi-Hua Zhou

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to support our approaches; in particular, our approaches are superior to those unprovably robust methods, and achieve better or comparable performance to those provably robust methods yet with the smallest running time.
Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China.
Pseudocode | Yes | Algorithm 1: Fast Provably Robust Decision Tree (FPRDT)
Open Source Code | No | The paper provides links to the code of *other* methods (e.g., RIGBT-h, TREANT, GROOT, ROCT, PRB tree, RGBDT), but does not state that the source code for the methods proposed in *this* paper (FPRDT or PRAdaBoost) is publicly available, nor does it provide a link to it.
Open Datasets | Yes | https://www.openml.org/ and https://www.cs.toronto.edu/~kriz/cifar.html
Dataset Splits | Yes | The performances of the compared methods are evaluated by five trials of 5-fold cross validation, where test adversarial accuracies are obtained by averaging over these 25 runs, as summarized in Table 3.
Hardware Specification | Yes | Experiments are performed with Python on nodes of a computational cluster with 20 CPUs (Intel Core i9-10900X, 3.7 GHz), running Ubuntu with 128 GB main memory.
Software Dependencies | No | The paper mentions "Python" but does not specify a version number or list specific libraries with their versions.
Experiment Setup | Yes | We take maximum depth 4 for TREANT and ROCT due to their high computational complexity, and do not restrict the depth for the other methods. We set 10 as the minimum number of instances for splitting a node, and each leaf node has at least 5 instances.
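The evaluation protocol above (five trials of 5-fold cross validation, averaged over 25 runs) and the stated tree constraints (at least 10 instances to split a node, at least 5 instances per leaf) can be sketched as follows. Since the paper does not release FPRDT code, a standard scikit-learn decision tree stands in for the robust tree, and `load_breast_cancer` is a placeholder for one of the OpenML datasets; both substitutions are assumptions for illustration only.

```python
# Sketch of the paper's evaluation protocol: 5 trials of 5-fold CV
# (25 runs total). A plain DecisionTreeClassifier is a stand-in for
# FPRDT, and load_breast_cancer is a placeholder dataset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

accuracies = []
for trial in range(5):                       # five independent trials
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=trial)
    for train_idx, test_idx in cv.split(X, y):
        clf = DecisionTreeClassifier(
            min_samples_split=10,            # minimum instances to split a node
            min_samples_leaf=5,              # minimum instances per leaf node
            random_state=0,
        )
        clf.fit(X[train_idx], y[train_idx])
        accuracies.append(clf.score(X[test_idx], y[test_idx]))

mean_acc = float(np.mean(accuracies))        # average over the 25 runs
print(f"{len(accuracies)} runs, mean accuracy = {mean_acc:.3f}")
```

Note that the paper reports *adversarial* accuracy under norm-bounded perturbations, which this sketch does not compute; it only reproduces the split-and-average structure of the experiments.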