Graph Contrastive Backdoor Attacks

Authors: Hangfan Zhang, Jinghui Chen, Lu Lin, Jinyuan Jia, Dinghao Wu

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | By extensively evaluating GCBA on multiple datasets and GCL methods, we show that our attack can achieve high attack success rates while preserving stealthiness.
Researcher Affiliation | Academia | Pennsylvania State University. Correspondence to: Dinghao Wu <dinghao@psu.edu>.
Pseudocode | Yes | Algorithm 1 in the Appendix describes the flow of the GCBA-poisoning attack, Algorithm 2 illustrates the GCBA-crafting attack, and Algorithm 3 presents the GCBA-natural-backdoor attack.
Open Source Code | No | The paper does not provide any explicit statements or links indicating that its source code is open or publicly available.
Open Datasets | Yes | We evaluate GCBA on five commonly used datasets: Cora, CiteSeer (Kipf & Welling, 2016), DBLP (Fu et al., 2020), BlogCatalog, and Flickr (Meng et al., 2019).
Dataset Splits | No | To train the downstream classifier, we used half of the nodes from the downstream dataset as the downstream training set and used the remaining nodes as the testing set. Training and testing splits are mentioned, but a separate validation split is not explicitly described (a sketch of this split appears after the table).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory specifications used for running the experiments.
Software Dependencies | No | We use PyTorch (Paszke et al., 2019) as the deep learning framework for implementations. We adopt AdamW (Loshchilov & Hutter, 2017) to optimize the trigger node attribute δ with a learning rate of 0.0015. These software components are cited, but explicit version numbers (e.g., PyTorch 1.9) are not provided (see the optimizer sketch after the table).
Experiment Setup | Yes | Parameter Settings: We list parameter settings for each type of adversary as follows. See Appendix C.3 for more experimental setting details, including the 0.0015 learning rate and the 1000 training epochs for downstream classifiers.
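
Since the paper releases no code, the quoted 50/50 downstream split can only be illustrated, not reproduced exactly. Below is a minimal PyTorch sketch under that assumption; num_nodes and the use of a random permutation are hypothetical choices, not the authors' implementation.

```python
import torch

# Hypothetical sketch of the quoted downstream split: half of the nodes
# train the downstream classifier, the remaining half form the test set.
num_nodes = 2708                     # placeholder (e.g., Cora's node count)
perm = torch.randperm(num_nodes)     # random ordering of node indices
train_idx = perm[: num_nodes // 2]   # downstream training set
test_idx = perm[num_nodes // 2 :]    # testing set; no validation split is described
```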
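
The optimizer settings quoted under Software Dependencies can be sketched the same way. Only the AdamW choice and the 0.0015 learning rate come from the paper; feat_dim, the initialization, the step count, and attack_loss are placeholders, since the real GCBA objective depends on the GCL encoder and target-class embeddings that are not released.

```python
import torch

# Sketch: optimize the trigger node attribute delta with AdamW at the
# reported learning rate of 0.0015.
feat_dim = 1433                                 # placeholder feature dimension
delta = torch.randn(feat_dim, requires_grad=True)
optimizer = torch.optim.AdamW([delta], lr=0.0015)

def attack_loss(d: torch.Tensor) -> torch.Tensor:
    # Placeholder objective standing in for the GCBA attack loss.
    return d.pow(2).sum()

for _ in range(100):                            # step count is illustrative
    optimizer.zero_grad()
    loss = attack_loss(delta)
    loss.backward()
    optimizer.step()
```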