Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning to Boost Resilience of Complex Networks via Neural Edge Rewiring
Authors: Shanchao Yang, Kaili Ma, Baoxiang Wang, Tianshu Yu, Hongyuan Zha
TMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we demonstrate the advantages of ResiNet over existing non-learning-based and learning-based methods in achieving superior network resilience, inductively generalizing to unseen graphs, and accommodating multiple resilience and utility metrics. Moreover, we show that FireGNN can learn meaningful representations from graph data without rich features, while current GNNs fail. Our implementation is available at https://github.com/yangysc/ResiNet. |
| Researcher Affiliation | Academia | Shanchao Yang (School of Data Science, The Chinese University of Hong Kong, Shenzhen); Kaili Ma (Department of Computer Science and Engineering, The Chinese University of Hong Kong); Baoxiang Wang (School of Data Science, The Chinese University of Hong Kong, Shenzhen); Tianshu Yu (School of Data Science, The Chinese University of Hong Kong, Shenzhen); Hongyuan Zha (School of Data Science, The Chinese University of Hong Kong, Shenzhen; Shenzhen Institute of Artificial Intelligence and Robotics for Society) |
| Pseudocode | No | The paper describes the methodology using text and diagrams (e.g., Figure 3: Overview of the architecture of ResiNet; Figure 4: Filtration Process in FireGNN), but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our implementation is available at https://github.com/yangysc/ResiNet. |
| Open Datasets | Yes | Synthetic and real datasets including EU power network (Zhou & Bialek, 2005) and Internet peer-to-peer networks (Leskovec et al., 2007; Ripeanu et al., 2002) are used to demonstrate the performance of ResiNet in transductive and inductive settings. The details of data generation and statistics of the datasets are presented in Appendix B.1. |
| Dataset Splits | Yes | We first randomly generate a fixed number of BA networks as the training data to train ResiNet and then evaluate ResiNet's performance directly on the test dataset without any additional optimization. Table 3: Statistics of graphs used for resilience maximization. ... BA-10-30 10-30 112 25088 1000/500 Inductive BA-20-200 20-200 792 1254528 4500/360 Inductive |
| Hardware Specification | Yes | We run all experiments for ResiNet on a platform with two GeForce RTX 3090 GPUs and one AMD 3990X CPU. |
| Software Dependencies | No | The paper mentions using a 5-layer GIN (Xu et al., 2019) as the backbone and the proximal policy optimization (PPO) algorithm (Schulman et al., 2017) for training, but it does not specify concrete software library versions (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | The hidden dimensions for node embedding and graph embedding in each hidden layer are set to 64, and the SeLU activation function is used after each message-passing propagation. A graph normalization strategy is adopted to stabilize the training of the GNN (Cai et al., 2021). The jumping knowledge network (Xu et al., 2018) is used to aggregate node features from different layers of the GNN. The overall policy is trained using a highly tuned implementation of the proximal policy optimization (PPO) algorithm (Schulman et al., 2017). Several critical strategies for stabilizing and accelerating the training of ResiNet are used, including advantage normalization (Andrychowicz et al., 2021), dual-clip PPO (the dual clip parameter is set to 10) (Ye et al., 2020), and the use of different optimizers for the policy network and the value network. Additionally, since the step-wise reward range is small (around 0.01), the reward is scaled by a factor of 10 to facilitate the training of ResiNet. The policy head model and value function model use two separate FireGNN encoder networks with the same architecture. |
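The dual-clip PPO objective mentioned in the setup cell (Ye et al., 2020, with the dual clip parameter set to 10) can be sketched as follows. This is an illustrative reimplementation for clarity, not the authors' code; the function name, NumPy usage, and default `clip_eps=0.2` are assumptions not stated in the paper.

```python
import numpy as np

def dual_clip_ppo_loss(ratio, advantage, clip_eps=0.2, dual_clip=10.0):
    """Dual-clip PPO surrogate loss (per Ye et al., 2020).

    ratio:     per-sample probability ratio pi_new(a|s) / pi_old(a|s)
    advantage: per-sample advantage estimates
    """
    # Standard PPO clipped surrogate: min of unclipped and clipped terms.
    surr_unclipped = ratio * advantage
    surr_clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    clipped = np.minimum(surr_unclipped, surr_clipped)
    # Dual clip: when the advantage is negative, bound the objective from
    # below by dual_clip * advantage so a very large ratio cannot produce
    # an arbitrarily large (destabilizing) gradient.
    dual = np.maximum(clipped, dual_clip * advantage)
    objective = np.where(advantage < 0, dual, clipped)
    return -objective.mean()  # negate: optimizers minimize the loss
```

For positive advantages this reduces to the ordinary PPO clipped objective; the extra `max` only engages when the advantage is negative and the ratio has drifted far from 1, which matches the stabilization role described in the setup cell.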