Characterizing the Evasion Attackability of Multi-label Classifiers

Authors: Zhuo Yang, Yufei Han, Xiangliang Zhang

AAAI 2021, pp. 10647-10655

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Substantial experimental results on real-world datasets validate the unveiled attackability factors and the effectiveness of the proposed empirical attackability indicator." "In the experimental study, we aim at 1) validating the theoretical attackability analysis in Theorems 1 and 2; and 2) evaluating the empirical attackability indicator estimated by GASE for targeted classifiers." |
| Researcher Affiliation | Collaboration | 1. King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; 2. Norton Research Group, Sophia Antipolis, France |
| Pseudocode | Yes | Algorithm 1: Greedy Attack Space Expansion |
| Open Source Code | No | The paper mentions using a third-party tool ('adversarial-robustness-toolbox') but does not state that the code for their own methodology is open-source or provide a link to it. |
| Open Datasets | Yes | "We include 4 datasets collected from various real-world multi-label applications, cyber security practices (Creepware), biology research (Genbase) (Tsoumakas, Katakis, and Vlahavas 2010), object recognition (VOC2012) (Everingham et al. 2012) and environment research (Planet) (Kaggle 2017)." |
| Dataset Splits | Yes | "On each data set, we randomly choose 50%, 30% and 20% data instances for training, validation and testing to build the targeted multi-label classifier." (A minimal split sketch is given below the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions the adversarial-robustness-toolbox (Nicolae et al. 2018) but does not specify its version number or any other software dependencies with specific versions. |
| Experiment Setup | No | The paper states that the decision threshold t_i in Algorithm 1 is set to 0 and discusses complexity-control settings such as the regularization parameter λ, but it does not provide concrete hyperparameter values (e.g., specific λ values, learning rates, batch sizes, epochs) or detailed training configurations required for full reproducibility. (A thresholding sketch is given below the table.) |
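
The 50%/30%/20% split quoted in the Dataset Splits row is simple to reproduce. Below is a minimal sketch, assuming generic NumPy arrays `X` (features) and `Y` (multi-label targets); the function name, seed, and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def random_split(X, Y, seed=0):
    """Randomly partition instances into 50% train / 30% validation / 20% test,
    mirroring the split ratios reported in the paper (the exact seed and
    instance assignment used by the authors are not specified)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.permutation(n)
    n_train = int(0.5 * n)
    n_val = int(0.3 * n)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return (X[train], Y[train]), (X[val], Y[val]), (X[test], Y[test])
```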
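
The only concrete setting reported for Algorithm 1 is the decision threshold t_i = 0. The sketch below shows how such a threshold turns real-valued multi-label decision scores into binary label predictions; the score values and function name are hypothetical examples, not the paper's implementation.

```python
import numpy as np

def predict_labels(scores, threshold=0.0):
    """Binarize multi-label decision scores: label i is predicted positive
    when its score exceeds the threshold t_i (set to 0, as stated in the paper)."""
    return (scores > threshold).astype(int)

# Example: scores for 3 instances over 4 labels (illustrative values only).
scores = np.array([[ 0.7, -0.2,  1.3, -0.9],
                   [-0.5,  0.1, -0.3,  0.8],
                   [ 2.0, -1.1,  0.4, -0.2]])
print(predict_labels(scores))
```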