Rethinking Graph Lottery Tickets: Graph Sparsity Matters
Authors: Bo Hui, Da Yan, Xiaolong Ma, Wei-Shinn Ku
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments were conducted which demonstrate the superiority of our proposed sparsification method over UGS and empirically verify our transferable GLT hypothesis. |
| Researcher Affiliation | Academia | Bo Hui (Auburn University), Da Yan (The University of Alabama at Birmingham), Xiaolong Ma (Clemson University), Wei-Shinn Ku (Auburn University) |
| Pseudocode | Yes | Algorithm 1: Iterative pruning process (a hedged sketch of such a loop appears below this table) |
| Open Source Code | No | The paper does not provide an explicit statement or link to its source code. |
| Open Datasets | Yes | We evaluate our sparsification method with three popular GNN models: GCN (Kipf & Welling, 2017), GAT (Velickovic et al., 2018) and GIN (Xu et al., 2019), on three widely used graph datasets from Chen et al. (2021b) (Cora, Citeseer and PubMed) and two OGB datasets from Hu et al. (2020) (Arxiv and MAG) for semi-supervised node classification. (A loading sketch appears below this table.) |
| Dataset Splits | Yes | Table 3 (statistics of datasets), split ratio given as train/val/test: Cora 120/500/1000; Citeseer 140/500/1000; PubMed 60/500/1000; Arxiv 54%/18%/28%; MAG 85%/9%/6% |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not specify version numbers for key software dependencies like Python, PyTorch, or other libraries. |
| Experiment Setup | Yes | For fair comparison, we follow UGS to use the default setting: pg = 5 and pθ = 20 unless otherwise stated. The value of λ is configured as 0.1 by default. ... In our sparsification process, the values of η1, η2 and α are configured as 1e-2, 1e-2 and 1e-1 by default, respectively. In each pruning round, the number of epochs to update masks is configured as the default number of epochs in the training process of the original GNN model. (These defaults are collected into a config sketch below this table.) |
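The Pseudocode row refers to the paper's Algorithm 1 (iterative pruning process), which the paper gives only as pseudocode. Below is a minimal PyTorch sketch of a UGS-style iterative pruning loop, not the authors' implementation: the names `m_g`, `m_theta`, and the helper `train_gnn` are assumptions, and the per-round rates follow the quoted defaults (pg = 5, pθ = 20, read as percentages as in UGS).

```python
import torch

def train_gnn(model, masked_adj, feats, labels, m_theta):
    """Hypothetical stand-in for co-training the GNN weights and the masks."""
    ...

def magnitude_prune(mask: torch.Tensor, scores: torch.Tensor, rate: float) -> torch.Tensor:
    """Zero out the `rate` fraction of still-active entries with the smallest |score|."""
    active = mask != 0
    n_prune = int(rate * int(active.sum()))
    if n_prune == 0:
        return mask
    # Threshold at the n_prune-th smallest |score| among active entries.
    threshold = torch.kthvalue(scores[active].abs(), n_prune).values
    new_mask = mask.clone()
    new_mask[active & (scores.abs() <= threshold)] = 0.0
    return new_mask

def iterative_glt_pruning(model, adj, feats, labels, p_g=0.05, p_theta=0.20, rounds=20):
    """Sketch of an iterative graph-lottery-ticket loop: train, prune the graph
    mask and weight masks by magnitude, then rewind the surviving weights."""
    m_g = torch.ones_like(adj)                                   # graph (adjacency) mask
    m_theta = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    init = {n: p.detach().clone() for n, p in model.named_parameters()}
    for _ in range(rounds):
        # 1) Co-train the GNN on the masked graph (hypothetical helper above).
        train_gnn(model, m_g * adj, feats, labels, m_theta)
        # 2) Prune p_g of the remaining edges and p_theta of the remaining weights.
        m_g = magnitude_prune(m_g, m_g, p_g)
        for n, p in model.named_parameters():
            m_theta[n] = magnitude_prune(m_theta[n], p.detach(), p_theta)
        # 3) Rewind weights to their original initialization (lottery-ticket step).
        with torch.no_grad():
            for n, p in model.named_parameters():
                p.copy_(init[n])
    return m_g, m_theta
```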
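For the Open Datasets and Dataset Splits rows, here is a minimal loading sketch using PyTorch Geometric and the `ogb` package. The paper does not state its data-loading code, so the root paths are placeholders; the public Planetoid and OGB splits match the ratios quoted from Table 3.

```python
from torch_geometric.datasets import Planetoid
from ogb.nodeproppred import PygNodePropPredDataset

# Planetoid citation graphs ship fixed public splits
# (e.g. PubMed: 60 train / 500 val / 1000 test nodes, as quoted above).
cora = Planetoid(root='data/Planetoid', name='Cora')[0]
citeseer = Planetoid(root='data/Planetoid', name='CiteSeer')[0]
pubmed = Planetoid(root='data/Planetoid', name='PubMed')[0]

# OGB benchmarks ship official splits; ogbn-arxiv's is 54%/18%/28%
# train/val/test, matching the quoted ratios ('ogbn-mag' loads analogously).
arxiv = PygNodePropPredDataset(name='ogbn-arxiv', root='data/OGB')
split_idx = arxiv.get_idx_split()  # {'train': ..., 'valid': ..., 'test': ...}
```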
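Finally, the defaults quoted in the Experiment Setup row, collected into one illustrative config object. The field names are assumptions (the paper releases no code), and pg/pθ are read as per-round pruning percentages (5% and 20%), following the UGS convention the paper says it adopts.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SparsificationDefaults:
    """Defaults quoted from the paper; names are illustrative, not from its code."""
    p_g: float = 0.05       # pg: graph pruning rate per round (5%)
    p_theta: float = 0.20   # pθ: weight pruning rate per round (20%)
    lam: float = 0.1        # λ: default coefficient in the objective
    eta_1: float = 1e-2     # η1
    eta_2: float = 1e-2     # η2
    alpha: float = 1e-1     # α
    mask_epochs: Optional[int] = None  # None = reuse the original model's training epochs
```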