Mitigating Emergent Robustness Degradation while Scaling Graph Learning
Authors: Xiangchi Yuan, Chunhui Zhang, Yijun Tian, Yanfang Ye, Chuxu Zhang
ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments have been conducted to demonstrate the anti-degraded robustness and scalability of our method, as compared to popular graph adversarial learning methods, under diverse attack intensities and various datasets of different sizes. |
| Researcher Affiliation | Academia | (1) Brandeis University, {xiangchiyuan,chuxuzhang}@brandeis.edu; (2) Dartmouth College, {chunhui.zhang.gr}@dartmouth.edu; (3) University of Notre Dame, {yijun.tian,yye7}@nd.edu |
| Pseudocode | No | The paper describes the method using figures and text descriptions, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code can be accessed through https://github.com/chunhuizng/emergent-degradation. |
| Open Datasets | Yes | We evaluate the robustness and scalability of our proposed DRAGON framework using the Graph Robustness Benchmark (GRB) dataset (Zheng et al., 2021), which includes graphs of varying scales, such as grb-cora (small-scale), grb-citeseer (small-scale), grb-flickr (medium-scale), grb-reddit (large-scale), and grb-aminer (large-scale). |
| Dataset Splits | Yes | Additionally, we adhere to the GRB benchmark's data splitting protocol, with 60% of the graph data as the training set, 10% as the validation set, and 30% as the test set for each benchmark dataset. |
| Hardware Specification | Yes | All experiments are performed on an NVIDIA V100 GPU with 32 GB of memory. |
| Software Dependencies | No | The paper refers to using GCN as a surrogate model and adhering to GRB benchmark configurations for baselines, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions, or specific library versions). |
| Experiment Setup | Yes | The hyperparameters of DPMoE are given in Table 5, including hyperparameters of adversarial training listed in Table 3. Specifically, for Cora, we set the total number of experts to N = 10 and the number of activated experts to k = 2. For other datasets, by default, we set the total number of experts to N = 4 and the number of activated experts to k = 1. Additionally, hyperparameters of DMGAN are shown in Table 10. The mask rate is 0.7, the walks per node is 1, and the walk length is 3. |
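
The Dataset Splits row above reports the GRB 60/10/30 protocol. As a rough illustration only (the GRB benchmark ships its own fixed splits; the function name `split_nodes` and the random shuffling here are assumptions, not the benchmark loader), a node-level split with those proportions could look like:

```python
import numpy as np

def split_nodes(num_nodes, train_frac=0.6, val_frac=0.1, seed=0):
    """Randomly assign node indices to train/val/test sets (60/10/30)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]  # remaining ~30% of nodes
    return train_idx, val_idx, test_idx
```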
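
The Experiment Setup row above specifies a total of N experts with k activated per input. The paper's DPMoE module is not reproduced here; the sketch below is a generic top-k gated mixture-of-experts layer in PyTorch, where the class name `TopKMoE`, the linear experts, and the feature dimensions are illustrative assumptions. It is instantiated with the reported grb-cora setting (N = 10, k = 2):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k gated mixture-of-experts layer (illustrative sketch only)."""
    def __init__(self, in_dim, out_dim, num_experts=4, k=1):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(in_dim, out_dim) for _ in range(num_experts))
        self.gate = nn.Linear(in_dim, num_experts)
        self.k = k

    def forward(self, x):
        scores = self.gate(x)                             # [num_nodes, num_experts]
        topk_val, topk_idx = scores.topk(self.k, dim=-1)  # keep only k experts per node
        weights = F.softmax(topk_val, dim=-1)             # renormalize over the selected experts
        out = torch.zeros(x.size(0), self.experts[0].out_features, device=x.device)
        for slot in range(self.k):
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * expert(x[mask])
        return out

# Table 5 settings quoted above: N = 10, k = 2 for grb-cora; N = 4, k = 1 for the other datasets.
moe_cora = TopKMoE(in_dim=64, out_dim=64, num_experts=10, k=2)
```

Routing each node to only k of the N experts keeps the per-node compute roughly constant as the expert count grows, which is the usual motivation for sparse gating when scaling model capacity.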