Robust Decision Trees Against Adversarial Examples

Authors: Hongge Chen, Huan Zhang, Duane Boning, Cho-Jui Hsieh

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on real world datasets demonstrate that the proposed algorithms can substantially improve the robustness of tree-based models against adversarial examples."
Researcher Affiliation | Academia | "¹MIT, Cambridge, MA 02139, USA. ²UCLA, Los Angeles, CA 90095, USA."
Pseudocode | Yes | "Algorithm 1 Robust Split with Information Gain; Algorithm 2 Robust Split for Boosted Tree" (a hedged sketch of the robust-split idea appears after this table).
Open Source Code | Yes | "Our code is at https://github.com/chenhongge/RobustTrees." (a hedged training sketch against this repository appears after this table).
Open Datasets | Yes | "We present results on three small datasets... We consider nine real world large or medium sized datasets and two small datasets (Chang & Lin, 2011), spanning a variety of data types (including both tabular and image data). [...] Figure 1. MNIST and Fashion-MNIST examples... [...] Table 2. Test accuracy and robustness of information gain based single decision tree model. [...] Table 3. The test accuracy and robustness of GBDT models."
Dataset Splits | Yes | "In Table 2, we present the average ℓ∞ distortion of the adversarial examples of both classical natural decision trees and our robust decision trees trained on different datasets. [...] In Table 3, we present the average ℓ∞ distortion of adversarial examples found by Cheng's ℓ∞ attack for both natural GBDT and robust GBDT models trained on those datasets." (a sketch of this distortion metric appears after this table).
Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., CPU or GPU models, or memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions software such as XGBoost, LightGBM, and CatBoost, but does not provide specific version numbers for any of these dependencies, which reproducibility requires.
Experiment Setup | Yes | "We use the same number of trees, depth and step size shrinkage as in Kantchelian et al. (2016) to train our robust and natural models." [...] Tables 2 and 3 list 'depth' for the models, and Table 3 lists 'ϵ' as a robust-training hyper-parameter.
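
The Pseudocode row cites Algorithm 1, which scores candidate splits by their worst-case information gain when each feature value may be perturbed by up to ε in ℓ∞ norm. The Python sketch below illustrates that idea under simplifying assumptions: it handles binary labels only and checks just the two extreme assignments of the ambiguous points (all pushed left, all pushed right), whereas the paper's Algorithm 1 enumerates additional cases. All names here are ours, not the repository's.

```python
import numpy as np

def entropy(y):
    """Binary entropy of a 0/1 label array; defined as 0 for empty arrays."""
    if len(y) == 0:
        return 0.0
    p = float(np.mean(y))
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def info_gain(y, y_left, y_right):
    """Information gain of splitting labels y into y_left / y_right."""
    n = len(y)
    return (entropy(y)
            - (len(y_left) / n) * entropy(y_left)
            - (len(y_right) / n) * entropy(y_right))

def robust_split_score(x_j, y, eta, eps):
    """Worst-case information gain for threshold eta on feature x_j when an
    adversary can shift each value by up to eps (simplified two-case check)."""
    left = y[x_j <= eta - eps]                       # stays left under any perturbation
    right = y[x_j > eta + eps]                       # stays right under any perturbation
    amb = y[(x_j > eta - eps) & (x_j <= eta + eps)]  # side is attacker-controlled
    gain_all_left = info_gain(y, np.concatenate([left, amb]), right)
    gain_all_right = info_gain(y, left, np.concatenate([right, amb]))
    return min(gain_all_left, gain_all_right)

# Example: pick the most robust threshold among candidate split points.
x_j = np.array([0.1, 0.2, 0.4, 0.5, 0.8, 0.9])
y = np.array([0, 0, 0, 1, 1, 1])
candidates = (x_j[:-1] + x_j[1:]) / 2.0
best = max(candidates, key=lambda eta: robust_split_score(x_j, y, eta, eps=0.05))
```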
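
The Open Source Code row points at the chenhongge/RobustTrees repository, a fork of XGBoost, so training should follow the standard xgb.train API. The sketch below assumes that; the robust-specific keys ('tree_method': 'robust_exact' and 'robust_eps') are recalled from the fork's README and should be verified there before use, and the file name and hyper-parameter values are placeholders, not the paper's settings.

```python
import xgboost as xgb  # built from the RobustTrees fork, not stock XGBoost

# 'robust_exact' and 'robust_eps' are assumptions recalled from the fork's
# README; confirm the exact keys in https://github.com/chenhongge/RobustTrees.
params = {
    "objective": "binary:logistic",
    "max_depth": 8,                  # tree depth (reported per dataset in Tables 2-3)
    "eta": 0.3,                      # step-size shrinkage
    "tree_method": "robust_exact",   # enable robust splitting (assumed key)
    "robust_eps": 0.3,               # robust-training radius epsilon (assumed key)
}
dtrain = xgb.DMatrix("train.libsvm")  # placeholder path, LIBSVM-format data
model = xgb.train(params, dtrain, num_boost_round=200)
```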
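
The Dataset Splits row quotes results reported as average ℓ∞ distortion of adversarial examples. Assuming clean inputs and their matched adversarial examples are stacked row-wise, that metric is the mean over examples of the largest per-feature change, as in this minimal sketch:

```python
import numpy as np

def avg_linf_distortion(X, X_adv):
    """Mean over examples of the l_inf norm ||x_adv - x||_inf (rows are examples)."""
    return float(np.mean(np.max(np.abs(X_adv - X), axis=1)))

# Tiny illustration with made-up data:
X = np.array([[0.0, 0.5], [1.0, 0.2]])
X_adv = np.array([[0.1, 0.4], [0.7, 0.2]])
print(avg_linf_distortion(X, X_adv))  # (0.1 + 0.3) / 2 = 0.2
```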