Gradient Rewiring for Editable Graph Neural Network Training
Authors: Zhimeng Jiang, Zirui Liu, Xiaotian Han, Qizhang Feng, Hongye Jin, Qiaoyu Tan, Kaixiong Zhou, Na Zou, Xia Hu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the effectiveness of GRE on various model architectures and graph datasets across multiple editing scenarios. |
| Researcher Affiliation | Academia | Zhimeng Jiang¹, Zirui Liu², Xiaotian Han³, Qizhang Feng¹, Hongye Jin¹, Qiaoyu Tan⁴, Kaixiong Zhou⁵, Na Zou⁶, Xia Hu⁷. ¹Texas A&M University, ²University of Minnesota, ³Case Western Reserve University, ⁴NYU Shanghai, ⁵North Carolina State University, ⁶University of Houston, ⁷Rice University |
| Pseudocode | Yes | Algorithm 1: Gradient Rewiring Editable (GRE) Graph Neural Networks Training; Algorithm 2: Gradient Rewiring Editable Plus (GRE+) Graph Neural Networks Training. A hedged sketch of the core rewiring idea appears after this table. |
| Open Source Code | Yes | The source code is available at https://github.com/zhimengj0326/Gradient_rewiring_editing. |
| Open Datasets | Yes | In our experiments, we utilize a selection of eight graph datasets from diverse domains, split evenly between small-scale and large-scale datasets. The small-scale datasets include Cora, A-computers [29], A-photo [29], and Coauthor-CS [29]. On the other hand, the large-scale datasets encompass Reddit [25], Flickr [2], ogbn-arxiv [3], and ogbn-products [3]. |
| Dataset Splits | Yes | Specifically, we first randomly split the train/validation/test dataset. Then, we ensure that each class has 20 samples in the training and 30 samples in the validation sets. The remaining samples are used for the test set. (See the split sketch after this table.) |
| Hardware Specification | Yes | For hardware configuration, all experiments are executed on a server with 251GB main memory, 24 AMD EPYC 7282 16-core processor CPUs, and a single NVIDIA GeForce RTX 3090 (24GB). |
| Software Dependencies | Yes | For software configuration, we use CUDA=11.3.1, python=3.8.0, pytorch=1.12.1, higher=0.2.1, torch-geometric=1.7.2, torch-sparse=0.6.16 in the software environment. |
| Experiment Setup | Yes | The hyperparameters for model architecture, learning rate, dropout rate, and training epochs are shown in Table 4. |
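
The rewiring step referenced in the Pseudocode row can be pictured with the following minimal sketch. It assumes a gradient-projection formulation: when the gradient of the edit loss conflicts with an anchor gradient computed on the training data, the conflicting component is removed so the edit preserves training locality. The function name `rewire` and its tensor arguments are illustrative assumptions, not the released API; the paper's Algorithms 1 and 2 (GRE and GRE+) define the actual procedure.

```python
import torch

def rewire(g_edit: torch.Tensor, g_train: torch.Tensor) -> torch.Tensor:
    """Hedged sketch: deflect the edit gradient away from the anchor gradient.

    g_edit  - flattened gradient of the loss on the node(s) being edited
    g_train - flattened anchor gradient of the loss on the training data
    Both are assumed to be 1-D tensors of equal length.
    """
    dot = torch.dot(g_edit, g_train)
    if dot >= 0:
        # No conflict: the edit direction does not increase the training loss.
        return g_edit
    # Conflict: subtract the component of g_edit along g_train so that the
    # rewired gradient has a non-negative inner product with the anchor.
    return g_edit - (dot / g_train.dot(g_train).clamp_min(1e-12)) * g_train
```

The projection guarantees `torch.dot(rewire(g_edit, g_train), g_train) >= 0`, which is the locality-preserving property this family of methods targets; GRE+ extends the idea with additional anchors per Algorithm 2.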
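The per-class split described in the Dataset Splits row (20 training and 30 validation samples per class, remainder to test) is straightforward to reproduce. Below is a minimal sketch assuming a 1-D PyTorch tensor of node labels; the identifiers `per_class_split`, `labels`, and `num_classes` are illustrative, not taken from the released code.

```python
import torch

def per_class_split(labels: torch.Tensor, num_classes: int,
                    n_train: int = 20, n_val: int = 30, seed: int = 0):
    """Randomly assign n_train/n_val nodes per class; the rest go to test."""
    g = torch.Generator().manual_seed(seed)
    train_idx, val_idx, test_idx = [], [], []
    for c in range(num_classes):
        idx = (labels == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(idx.numel(), generator=g)]  # shuffle class
        train_idx.append(idx[:n_train])
        val_idx.append(idx[n_train:n_train + n_val])
        test_idx.append(idx[n_train + n_val:])
    return torch.cat(train_idx), torch.cat(val_idx), torch.cat(test_idx)
```

Applied to, e.g., Cora's label vector, this yields exactly 20 training and 30 validation nodes per class, matching the protocol quoted above.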