(De-)Randomized Smoothing for Decision Stump Ensembles

Authors: Miklós Horváth, Mark Müller, Marc Fischer, Martin Vechev

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "An extensive experimental evaluation on computer vision and tabular data tasks shows that our approach yields significantly higher certified accuracies than the state-of-the-art for tree-based models." "An extensive empirical evaluation, demonstrating the effectiveness of our approach and establishing a new state-of-the-art in a wide range of settings (Section 5)."
Researcher Affiliation | Academia | "Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev, Department of Computer Science, ETH Zurich, Switzerland, mihorvat@ethz.ch, {mark.mueller,marc.fischer,martin.vechev}@inf.ethz.ch"
Pseudocode | Yes | "Algorithm 1: Stump Ensemble PDF computation via Dynamic Programming, function COMPUTEPDF({(Γ, v)_i}_{i=1}^d, x, φ)" (a hedged sketch of this computation follows the table)
Open Source Code | Yes | "An extensive experimental evaluation on computer vision and tabular data tasks shows that our approach yields significantly higher certified accuracies than the state-of-the-art for tree-based models. We release all code and trained models at https://github.com/eth-sri/drs."
Open Datasets | Yes | "We compare to prior work on the DIABETES [36], BREASTCANCER [37], FMNIST-SHOES [38], MNIST 1 VS. 5 [39], and MNIST 2 VS. 6 [39] datasets and are the first to provide joint certificates on a set of new benchmarks (Section 5.2). Finally, we perform an ablation study, investigating the effect of DRS's key components (Section 5.3)." "All datasets we use are publicly available."
Dataset Splits | Yes | "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] We provide full training details in App. B." "However, in App. C we report numerous error bars with respect to the data split via 5-fold cross-validation."
Hardware Specification | Yes | "We implement our approach in PyTorch [35] and evaluate it on Intel Xeon Gold 6242 CPUs and an NVIDIA RTX 2080Ti."
Software Dependencies | No | "We implement our approach in PyTorch [35]." The paper mentions PyTorch but does not provide a specific version number for it or any other software dependency.
Experiment Setup | Yes | "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] We provide full training details in App. B."
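
The pseudocode row above refers to Algorithm 1 (COMPUTEPDF), which computes the exact output distribution of a decision stump ensemble under independent per-feature smoothing noise via dynamic programming. The following is a minimal, illustrative Python sketch of that idea, not the authors' exact Algorithm 1: it assumes isotropic Gaussian noise, keeps exact (unquantized) output values in a dictionary, and uses hypothetical names (Stump, compute_pdf); the reference implementation is in the released code at https://github.com/eth-sri/drs.

    # Illustrative sketch of a stump-ensemble PDF computation under Gaussian
    # smoothing noise. Hypothetical names and simplifications; not the paper's
    # exact Algorithm 1.
    from collections import defaultdict
    from dataclasses import dataclass
    from math import erf, sqrt


    @dataclass
    class Stump:
        feature: int      # index of the feature this stump splits on
        threshold: float  # split position (roughly Γ in the quoted signature)
        v_left: float     # output if the noisy feature value is <= threshold
        v_right: float    # output otherwise


    def gaussian_cdf(z: float) -> float:
        # Standard normal CDF via the error function.
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))


    def compute_pdf(stumps, x, sigma):
        """Return {ensemble_output: probability} for input x under N(0, sigma^2 I) noise."""
        # Stumps on the same feature are fully correlated; stumps on different
        # features are independent because the noise is independent per feature.
        by_feature = defaultdict(list)
        for s in stumps:
            by_feature[s.feature].append(s)

        pdf = {0.0: 1.0}  # point mass before any feature has been processed
        for j, group in by_feature.items():
            # Between two consecutive thresholds, every stump's output is fixed,
            # so the per-feature distribution has at most len(group)+1 atoms.
            th = sorted(s.threshold for s in group)
            cdf = [0.0] + [gaussian_cdf((t - x[j]) / sigma) for t in th] + [1.0]
            local = defaultdict(float)
            for k in range(len(th) + 1):
                prob = cdf[k + 1] - cdf[k]  # probability of landing in interval k
                value = sum(
                    s.v_left if (k < len(th) and s.threshold >= th[k]) else s.v_right
                    for s in group
                )
                local[round(value, 9)] += prob
            # Dynamic programming step: convolve the per-feature distribution
            # into the running distribution of the partial ensemble sum.
            new_pdf = defaultdict(float)
            for out, p in pdf.items():
                for val, q in local.items():
                    new_pdf[round(out + val, 9)] += p * q
            pdf = dict(new_pdf)
        return pdf


    if __name__ == "__main__":
        # Three stumps over two features; the returned probabilities sum to 1.
        stumps = [Stump(0, 0.5, -1.0, 1.0), Stump(0, 1.5, 0.0, 2.0), Stump(1, 0.2, -1.0, 1.0)]
        print(compute_pdf(stumps, x=[0.4, 0.3], sigma=0.25))

Because each stump depends on a single feature and the smoothing noise factorizes over features, this convolution is exact, which is what enables deterministic (de-randomized) certificates rather than sampling-based bounds.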