Graph Lottery Ticket Automated
Authors: Guibin Zhang, Kun Wang, Wei Huang, Yanwei Yue, Yang Wang, Roger Zimmermann, Aojun Zhou, Dawei Cheng, Jin Zeng, Yuxuan Liang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that AdaGLT outperforms state-of-the-art competitors across multiple datasets of various scales and types, particularly in scenarios involving deep GNNs. |
| Researcher Affiliation | Academia | Tongji University; The Hong Kong University of Science and Technology (Guangzhou); University of Science and Technology of China (USTC); RIKEN AIP; National University of Singapore; The Chinese University of Hong Kong |
| Pseudocode | Yes | Algorithm 1: Algorithm workflow of AdaGLT |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | Datasets. To comprehensively evaluate AdaGLT across diverse datasets and tasks, we opt for Cora, Citeseer, and PubMed (Kipf & Welling, 2017b) for node classification. For larger graphs, we choose Ogbn-Arxiv/Proteins/Products (Hu et al., 2020) for node classification and Ogbl-Collab for link prediction. |
| Dataset Splits | Yes | For node classification in small- and medium-scale datasets, following the semi-supervised settings (Chen et al., 2021b), we utilized 140 labeled data points (Cora), 120 (Citeseer), and 60 (PubMed) for training, with 500 nodes allocated for validation and 1000 nodes for testing. ... The data splits for Ogbn-ArXiv, Ogbn-Proteins, Ogbn-Products, and Ogbl-Collab were provided by the benchmark (Hu et al., 2020). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | We summarize the detailed hyperparameter settings in Tab. 5. Table 5: Detailed hyper-parameter configurations. η_g and η_θ denote the coefficients attached to R(t_A) and R(t_θ), respectively. Columns: Dataset, Model, Epochs (train/retrain), Optimizer, Learning Rate, Weight Decay, η_g, η_θ, ω. |
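
The splits quoted in the table match the standard "public" Planetoid splits and the official OGB benchmark splits. Below is a minimal sketch of how they can be loaded; it assumes the `torch_geometric` and `ogb` packages, which the paper does not confirm using.

```python
# Minimal sketch: reproduce the data splits quoted above. Assumes the
# torch_geometric and ogb packages; the paper does not state its loaders.
from torch_geometric.datasets import Planetoid
from ogb.nodeproppred import PygNodePropPredDataset

# Semi-supervised "public" split: 140 (Cora) / 120 (Citeseer) / 60 (PubMed)
# labeled training nodes, 500 validation nodes, 1000 test nodes.
cora = Planetoid(root="data/Planetoid", name="Cora", split="public")[0]
assert int(cora.train_mask.sum()) == 140
assert int(cora.val_mask.sum()) == 500
assert int(cora.test_mask.sum()) == 1000

# OGB datasets ship their official splits with the benchmark (Hu et al., 2020).
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/OGB")
split_idx = arxiv.get_idx_split()  # dict with "train" / "valid" / "test" indices
print({k: v.numel() for k, v in split_idx.items()})
```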
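
Table 5 describes η_g and η_θ only as coefficients attached to the regularizers R(t_A) and R(t_θ). The sketch below shows how such coefficients typically enter a training objective; the function and argument names are hypothetical, since the paper releases no code.

```python
import torch

# Hypothetical sketch of the objective implied by Table 5: eta_g scales the
# graph-sparsification regularizer R(t_A) and eta_theta scales the
# weight-sparsification regularizer R(t_theta). All names are illustrative.
def total_loss(task_loss: torch.Tensor,
               reg_t_A: torch.Tensor,
               reg_t_theta: torch.Tensor,
               eta_g: float,
               eta_theta: float) -> torch.Tensor:
    # The task loss (e.g., cross-entropy) is augmented by the two weighted
    # regularization terms before backpropagation.
    return task_loss + eta_g * reg_t_A + eta_theta * reg_t_theta
```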