Searching Lottery Tickets in Graph Neural Networks: A Dual Perspective

Authors: Kun Wang, Yuxuan Liang, Pengkun Wang, Xu Wang, Pengfei Gu, Junfeng Fang, Yang Wang

ICLR 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experimental results on various graph-related tasks validate the effectiveness of our framework. ... 4 EXPERIMENTS In this section, we conduct extensive experiments to answer the following research questions:
Researcher Affiliation Academia 1 Key Laboratory of Precision and Intelligent Chemistry, University of Science and Technology of China (USTC), Hefei, China. 2 School of Software Engineering, USTC. 3 School of Data Science, USTC. 4 National University of Singapore, Singapore. {wk520529, pengkun, wx309, fjf, gpf9061}@mail.ustc.edu.cn, angyan@ustc.edu.cn, yuxliang@outlook.com
Pseudocode Yes Algorithm 1 Dual Graph Lottery Tickets (DGLT) Algorithm (aligned with Fig. 2)
Open Source Code No The paper does not provide a direct link to a code repository or explicitly state that the source code for their methodology is being released or made publicly available.
Open Datasets Yes Datasets. Six benchmarks for GNN evaluation are employed in this paper to verify the effectiveness of our DGLT. To be specific, we choose three popular graph-based datasets, including Cora, Citeseer and PubMed (Kipf & Welling, 2016) for node classification and link prediction. To test the scalability of DGLT, we further use a large-scale dataset called Ogbl-Collab (Hu et al., 2020) for link prediction. Finally, we examine our algorithm for graph classification on D&D (Dobson & Doig, 2003) and ENZYMES (Borgwardt et al., 2005).
Dataset Splits Yes Train-val-test Splitting of Datasets. As for the node classification task on regular-size datasets, we follow the same data split criteria among different backbones, i.e., 700 (Cora), 420 (Citeseer) and 460 (PubMed) labeled data for training, 500 nodes for validation and 500 nodes for testing. As for link prediction, we shuffle the datasets and sample 85% of edges for training, 10% for validation and 5% for testing, respectively. For Ogbl-Collab, in order to simulate a real collaborative recommendation application, we take collaborations before 2017 as training edges, collaborations in 2018 as validation edges and collaborations in 2019 as testing edges. For the graph classification task, we choose the D&D and ENZYMES datasets. ... We perform 10-fold cross-validation to observe model performance and report the accuracy averaged over the 10 folds.
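The 85/10/5 edge split quoted above can be reproduced with a short sketch. The function name, the use of NumPy, and the fixed seed are illustrative assumptions, not details from the paper:

```python
import numpy as np

def split_edges(edges, train_frac=0.85, val_frac=0.10, seed=0):
    """Shuffle edges and split into train/val/test sets (85/10/5 by default).

    This is a hypothetical helper sketching the protocol quoted in the
    reproducibility report, not the authors' released code.
    """
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)
    perm = rng.permutation(len(edges))   # shuffle the edge list
    n_train = int(train_frac * len(edges))
    n_val = int(val_frac * len(edges))
    train = edges[perm[:n_train]]
    val = edges[perm[n_train:n_train + n_val]]
    test = edges[perm[n_train + n_val:]]  # remainder (~5%) goes to test
    return train, val, test

# Example: 1000 synthetic edges on a ring graph
edges = np.stack([np.arange(1000), np.roll(np.arange(1000), -1)], axis=1)
train, val, test = split_edges(edges)
print(len(train), len(val), len(test))  # 850 100 50
```

Note that for Ogbl-Collab the paper uses a temporal split (by collaboration year) rather than random shuffling, so this sketch applies only to the regular-size datasets.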
Hardware Specification Yes Computing Infrastructures: three NVIDIA Tesla V100 (16GB GPU) Software Framework: PyTorch
Software Dependencies No The paper mentions "Software Framework: Pytorch" but does not specify a version number for PyTorch or any other software dependencies, which is necessary for reproducibility.
Experiment Setup Yes Further, we place the training details and hyper-parameter configuration in Table 6. Table 6: Training details and hyper-parameter configuration. ξ(0) and ρ(0) indicate the starting values of the graph regularization and weight regularization, respectively. ξa and ρa indicate the per-step increments of the graph regularization and weight regularization, respectively. ... Task, Dataset, Epochs (pre-train/fine-tune), ξ(0), ξa, ρ(0), ρa, Optimizer, learning rate
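The hyper-parameters above describe regularization coefficients with a starting value and a fixed increment. A linear schedule is one plausible reading of that description (an assumption on my part; the paper may update the coefficients differently):

```python
def reg_coeff(start, step, t):
    """Regularization coefficient at step t under a linear schedule.

    `start` corresponds to ξ(0) or ρ(0) and `step` to ξa or ρa in
    Table 6 of the paper; the linear form start + t * step is an
    assumption inferred from the 'starting value' / 'increase value'
    wording, not a confirmed detail.
    """
    return start + t * step

# Example: coefficient after 4 increments, starting at 1.0 with step 0.5
print(reg_coeff(1.0, 0.5, 4))  # 3.0
```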