RGE: A Repulsive Graph Rectification for Node Classification via Influence
Authors: Jaeyun Song, Sungyub Kim, Eunho Yang
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we demonstrate that RGE consistently outperforms existing methods on various benchmark datasets. |
| Researcher Affiliation | Collaboration | ¹Graduate School of Artificial Intelligence, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea; ²AITRICS, Seoul, Korea. |
| Pseudocode | Yes | Algorithm 1 Repulsive edge Group Elimination (RGE); a hedged leave-group-out proxy sketch follows the table. |
| Open Source Code | Yes | Code will be available at https://github.com/Jaeyun-Song/RGE.git |
| Open Datasets | Yes | We demonstrate the effectiveness of RGE on various benchmark datasets, including citation networks (Sen et al., 2008), commercial graphs (Shchur et al., 2018), WebKB, Actor network (Pei et al., 2020), and Wikipedia networks (Rozemberczki et al., 2021). A hedged loading sketch follows the table. |
| Dataset Splits | Yes | We assume nodes are divided into three different subsets for training and evaluation: train nodes V_Tr for training parameters, validation nodes V_Val for selecting hyperparameters and eliminating edges, and test nodes V_Test for evaluation. A minimal masking sketch follows the table. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for experiments, such as CPU or GPU models, memory, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions models like SGC, GCN, GAT, and the Adam optimizer, but does not specify software libraries or frameworks with their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We train 2-hop SGC (Wu et al., 2019) for 200 epochs, while the 2-layer GCN and GAT are trained for 2000 epochs. We employ the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.2 for SGC except for CiteSeer (Sen et al., 2008) (0.5), and 0.01 for GCN and GAT. We choose the weight decay among 100 values ranging from 0.9 to 10⁻¹⁰ in the log scale according to the validation accuracy for SGC. For GCN and GAT, we select the weight decay from {10⁻³, 10⁻⁴, 10⁻⁵, 10⁻⁶, 10⁻⁷} due to higher computational costs compared to SGC. We utilize dropout (Srivastava et al., 2014) of 0.5 in each layer of GCN and GAT. We adopt hidden dimensions of 32 and 64 for GCN and GAT, respectively, and use eight heads for GAT. A hedged configuration sketch of this setup follows the table. |
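
For the Algorithm 1 row: the paper's RGE removes groups of harmful ("repulsive") edges identified via an influence estimate on the validation nodes. The sketch below is only a naive leave-group-out proxy for that idea, not the paper's influence-function computation; `edges`, `groups`, and `val_loss` are hypothetical names, and `val_loss` stands in for retraining a cheap model such as SGC and measuring validation loss.

```python
def leave_group_out_elimination(edges, groups, val_loss, n_rounds=3):
    """Greedily drop the edge group whose removal most reduces
    validation loss. A naive retraining-based proxy for the
    influence-based ranking in RGE; all names are illustrative."""
    kept = set(range(len(edges)))                  # indices of surviving edges
    base = val_loss([edges[i] for i in sorted(kept)])
    for _ in range(n_rounds):
        best_gain, best_kept = 0.0, None
        for group in groups:                       # candidate edge-index groups
            trial = kept - set(group)
            gain = base - val_loss([edges[i] for i in sorted(trial)])
            if gain > best_gain:                   # removal helped validation
                best_gain, best_kept = gain, trial
        if best_kept is None:                      # no group is repulsive; stop
            break
        kept, base = best_kept, base - best_gain
    return [edges[i] for i in sorted(kept)]

# Toy check: edges 2 and 3 are "noisy"; the loss drops when they go.
edges = [(0, 1), (1, 2), (0, 3), (2, 3)]
noisy = {2, 3}
val_loss = lambda es: 1.0 + 0.1 * sum(e in [edges[i] for i in noisy] for e in es)
print(leave_group_out_elimination(edges, [[2], [3]], val_loss))  # [(0, 1), (1, 2)]
```

Influence functions exist precisely to approximate such leave-out effects without the retraining loop above, which is what makes a closed-form estimate attractive at graph scale.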
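For the datasets row: all of the named benchmark families have loaders in PyTorch Geometric, so one plausible way to materialize them is sketched below. The paper does not say which loaders it used; the `root` path and the particular dataset names chosen here are assumptions.

```python
from torch_geometric.datasets import (
    Planetoid, Amazon, WebKB, Actor, WikipediaNetwork,
)

root = "data"  # assumed cache directory
datasets = {
    # Citation networks (Sen et al., 2008)
    "Cora":      Planetoid(root, name="Cora"),
    "CiteSeer":  Planetoid(root, name="CiteSeer"),
    "PubMed":    Planetoid(root, name="PubMed"),
    # Commercial (co-purchase) graphs (Shchur et al., 2018)
    "Computers": Amazon(root, name="Computers"),
    "Photo":     Amazon(root, name="Photo"),
    # WebKB and Actor networks (Pei et al., 2020)
    "Cornell":   WebKB(root, name="Cornell"),
    "Actor":     Actor(root),
    # Wikipedia networks (Rozemberczki et al., 2021)
    "Chameleon": WikipediaNetwork(root, name="chameleon"),
}
for name, ds in datasets.items():
    print(name, ds[0].num_nodes, ds[0].num_edges)
```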
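For the dataset-splits row: the three disjoint node sets V_Tr, V_Val, and V_Test are commonly represented as boolean masks. A minimal sketch, assuming a random split whose counts are purely illustrative (the paper's actual splits come from the benchmarks themselves):

```python
import torch

def random_node_split(num_nodes, num_train, num_val, seed=0):
    """Partition node ids into disjoint train/val/test masks.
    Counts and seed are illustrative, not the paper's protocol."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:num_train]] = True                   # V_Tr: fit parameters
    val_mask[perm[num_train:num_train + num_val]] = True  # V_Val: tune + eliminate edges
    test_mask[perm[num_train + num_val:]] = True          # V_Test: final evaluation
    return train_mask, val_mask, test_mask

train_mask, val_mask, test_mask = random_node_split(2708, 140, 500)
assert not (train_mask & val_mask).any() and not (val_mask & test_mask).any()
```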
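For the experiment-setup row: reading the quoted hyperparameters into code gives the hedged sketch below. The grid construction mirrors the stated sweep (100 log-spaced weight decays from 0.9 down to 10⁻¹⁰ for SGC; a small discrete set for GCN/GAT); the config keys and `make_optimizer` helper are placeholders for whatever implementations the authors used.

```python
import numpy as np
import torch

# Weight-decay sweep for SGC: 100 values from 0.9 to 1e-10, log-spaced.
sgc_weight_decays = np.logspace(np.log10(0.9), -10, num=100)
gnn_weight_decays = [1e-3, 1e-4, 1e-5, 1e-6, 1e-7]  # smaller GCN / GAT grid

configs = {
    "SGC": dict(hops=2, epochs=200, lr=0.2),  # lr 0.5 on CiteSeer
    "GCN": dict(layers=2, epochs=2000, lr=0.01, hidden=32, dropout=0.5),
    "GAT": dict(layers=2, epochs=2000, lr=0.01, hidden=64, heads=8, dropout=0.5),
}

def make_optimizer(model, lr, weight_decay):
    # Adam (Kingma & Ba, 2015), as stated in the setup row.
    return torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
```

Per the quoted setup, selection over these weight-decay grids is by validation accuracy.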