Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias
Authors: Zihan Liu, Yun Luo, Lirong Wu, Zicheng Liu, Stan Z. Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate DEBIAS on three benchmark datasets against 7 baselines for untargeted graph structure attacks. The experimental results show that DEBIAS consistently outperforms baselines on both clean and poisoned graphs. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, USA, 92093 |
| Pseudocode | Yes | Algorithm 1: Gradient Debiasing for Untargeted Graph Attacks |
| Open Source Code | No | The paper does not provide an explicit statement about releasing code or a link to a code repository for the methodology. |
| Open Datasets | Yes | Datasets We evaluate the effectiveness of DEBIAS on three widely-used benchmark datasets in graph machine learning: Cora, CiteSeer, and PubMed [19]. |
| Dataset Splits | Yes | For datasets Cora, CiteSeer, and PubMed, we use the commonly used 20/30/50 training/validation/testing split. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper does not list ancillary software dependencies with version numbers (e.g., Python 3.8, CPLEX 12.4) needed to replicate the experiments. |
| Experiment Setup | Yes | For all experiments, we use a two-layer GCN model [19] with hidden dimension 16 and a learning rate of 0.01. We use the Adam optimizer [20] and train the GCN model for 200 epochs. To mitigate overfitting, we apply a dropout ratio of 0.5 to the GCN model. For the Cora, CiteSeer, and PubMed datasets, we use the commonly used 20/30/50 training/validation/testing split. (See the setup sketch below this table.) |
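
The Experiment Setup row describes a standard two-layer GCN victim model. Below is a minimal sketch of that configuration, assuming PyTorch Geometric and its `Planetoid` loader (neither library nor loader is stated in the paper); it uses the loader's default masks rather than the paper's 20/30/50 split, which would require custom masks.

```python
# Hypothetical sketch of the victim-model setup reported above, assuming PyTorch Geometric.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

# Load one of the three benchmark datasets (Cora, CiteSeer, or PubMed).
dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

class GCN(torch.nn.Module):
    """Two-layer GCN with hidden dimension 16 and dropout 0.5, as in the reported setup."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)

model = GCN(dataset.num_features, 16, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # Adam, learning rate 0.01

model.train()
for epoch in range(200):  # 200 training epochs
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
```

This sketch covers only the victim GCN described in the setup row; the attack procedure itself (Algorithm 1, gradient debiasing) is not reproduced here.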