Adversarial Attacks on Graph Classifiers via Bayesian Optimisation

Authors: Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael A. Osborne, Xiaowen Dong

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack.
Researcher Affiliation | Academia | Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael A. Osborne, Xiaowen Dong; Machine Learning Research Group, University of Oxford, Oxford, UK; {xwan,kenlay,robin,arno,mosb,xdong}@robots.ox.ac.uk
Pseudocode | Yes | The overall routine of our proposed GRABNEL is presented in Fig 1 (and in pseudo-code form in App A).
Open Source Code | Yes | An open-source implementation is available at https://github.com/xingchenwan/grabnel.
Open Datasets | Yes | We first conduct experiments on four common TU datasets [25], namely (in ascending order of average graph sizes in the dataset) IMDB-M, PROTEINS, COLLAB and REDDIT-MULTI-5K. ... We use a 80-10-10 train-validation-test split (with a fixed random seed 0 for all dataset splits) for all TU datasets, as is standard practice [10, 23].
Dataset Splits | Yes | We use a 80-10-10 train-validation-test split (with a fixed random seed 0 for all dataset splits) for all TU datasets, as is standard practice [10, 23].
Hardware Specification | No | The authors acknowledge the Oxford-Man Institute of Quantitative Finance for providing computing resources but do not specify any particular hardware components like CPU or GPU models.
Software Dependencies | No | The paper mentions various models and libraries (e.g., GCN, GIN, Deep Graph Library) but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | For GRABNEL, we set the initial acquisition population size to 32 and the evolution population size to 64. The total number of GA iterations is 100. We also use 2 iterations for the WL feature extractor on all graphs, except for the Twitter dataset, where we use 3 iterations... The attack budget r = 0.03 for all experiments and B = 40...
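For concreteness, the 80-10-10 split with a fixed random seed 0 quoted under Dataset Splits could be reproduced with a short sketch like the one below; the function and variable names are illustrative and are not taken from the GRABNEL codebase.

```python
import numpy as np

def split_indices(num_graphs: int, seed: int = 0,
                  frac_train: float = 0.8, frac_val: float = 0.1):
    """Illustrative 80-10-10 train/validation/test split with a fixed seed,
    mirroring the protocol quoted above (seed 0 for all TU dataset splits)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_graphs)
    n_train = int(frac_train * num_graphs)
    n_val = int(frac_val * num_graphs)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# Example: split the 1113 graphs of the PROTEINS dataset
train_idx, val_idx, test_idx = split_indices(1113)
```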
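Similarly, the attack hyperparameters listed under Experiment Setup can be collected into a single configuration; the field names below are hypothetical placeholders that only record the values quoted from the paper, not identifiers from the GRABNEL repository.

```python
# Hypothetical settings dict summarising the quoted hyperparameters;
# names are illustrative, values come from the Experiment Setup row above.
grabnel_settings = {
    "init_population_size": 32,    # initial acquisition population size
    "evolution_population_size": 64,
    "ga_iterations": 100,          # total genetic-algorithm iterations
    "wl_iterations": 2,            # WL feature-extractor iterations (3 for the Twitter dataset)
    "attack_budget_r": 0.03,       # structural perturbation budget r
    "query_budget_B": 40,          # B, as stated in the setup above
}
```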