Learning to Reweight for Generalizable Graph Neural Network

Authors: Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, Fei Wu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we describe the experimental setup used to evaluate the effectiveness of our proposed method. Experimental results demonstrate the effectiveness of our framework across different GNN backbones and datasets.
Researcher Affiliation | Collaboration | 1 Institute of Artificial Intelligence, Zhejiang University; 2 Shanghai Institute for Advanced Study, Zhejiang University; 3 The Pennsylvania State University; 4 DAMO Academy, Alibaba Group.
Pseudocode | No | The paper describes its algorithms and methods using equations and textual explanations, but it does not include a formally structured pseudocode block or an algorithm box.
Open Source Code | No | The paper does not contain an explicit statement about the release of its source code, nor a link to a code repository.
Open Datasets | Yes | Two families of public benchmarks are used: (1) three graph classification benchmarks, COLLAB, PROTEINS, and D&D; (2) the Open Graph Benchmark (OGB) (Hu et al. 2020), from which OGBG-MOLTOX21, BACE, BBBP, CLINTOX, HIV, and ESOL serve as six graph property prediction datasets with distribution shifts. A loading sketch follows the table.
Dataset Splits | Yes | This bi-level update aims to optimize the graph weights based on validation performance to avoid over-fitting, where $\mathcal{L}_{\text{train}}(\theta, W)$ and $\mathcal{L}_{\text{val}}(\theta^{*}(W), W)$ are the lower-level and higher-level objectives on the training and validation sets, respectively (the formulation is written out after the table). The graph classification benchmarks are split based on graph size; the OGB datasets use the scaffold splitting technique, which separates graphs by their two-dimensional structural frameworks.
Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper mentions learning rates ($\eta_{\theta}$, $\eta_{W}$) as part of the bi-level training algorithm but does not provide numerical values for these or for any other hyperparameters (e.g., batch size, number of epochs) used in its experimental setup; the update sketch after the table therefore uses placeholder values.
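
For reference, the bi-level objective quoted in the Dataset Splits row can be written out as follows. This is a reconstruction from the quoted description, assuming the standard alternating scheme with step sizes $\eta_{\theta}$ and $\eta_{W}$; the paper reports no numerical values for either.

```latex
% Theta denotes the GNN parameters, W the per-graph weights; the alternating
% updates below are the standard one-step scheme, not values from the paper.
\begin{align}
  \min_{W} \ & \mathcal{L}_{\text{val}}\big(\theta^{*}(W),\, W\big)
  \quad \text{s.t.} \quad
  \theta^{*}(W) = \arg\min_{\theta}\, \mathcal{L}_{\text{train}}(\theta, W), \\
  \theta^{t+1} &= \theta^{t} - \eta_{\theta}\, \nabla_{\theta}\,
  \mathcal{L}_{\text{train}}(\theta^{t}, W^{t}), \\
  W^{t+1} &= W^{t} - \eta_{W}\, \nabla_{W}\,
  \mathcal{L}_{\text{val}}\big(\theta^{t+1}(W),\, W\big)\Big|_{W = W^{t}}.
\end{align}
```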
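The OGB datasets named in the Open Datasets row, together with the scaffold splits mentioned in the Dataset Splits row, are available through the public ogb package. The paper does not describe its data pipeline, so the snippet below only illustrates the standard API; the batch size is a placeholder.

```python
# Illustrative loader for one of the OGB datasets named above; requires the
# public `ogb` and `torch_geometric` packages.
from ogb.graphproppred import PygGraphPropPredDataset
from torch_geometric.loader import DataLoader

# For the molecule datasets, get_idx_split() returns the scaffold split
# (train/valid/test separated by two-dimensional structural framework).
dataset = PygGraphPropPredDataset(name="ogbg-moltox21")
split_idx = dataset.get_idx_split()

train_loader = DataLoader(dataset[split_idx["train"]], batch_size=32, shuffle=True)
valid_loader = DataLoader(dataset[split_idx["valid"]], batch_size=32)
test_loader = DataLoader(dataset[split_idx["test"]], batch_size=32)
```

No analogous size-based split ships with the TU benchmarks (COLLAB, PROTEINS, D&D), so the graph-size split described in the paper would have to be constructed manually.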
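Since no code is released, the following PyTorch (>= 2.0) sketch shows how the alternating update above is typically implemented with a one-step approximation of the lower-level problem, in the spirit of learning-to-reweight methods. The function name bilevel_step, the step sizes, and the softmax normalization of the graph weights W are illustrative assumptions, not details from the paper.

```python
import torch
from torch.func import functional_call  # requires PyTorch >= 2.0

def bilevel_step(model, per_graph_loss, W, train_batch, val_batch,
                 eta_theta=1e-3, eta_w=1e-2):
    """One alternating update: a differentiable step on the GNN parameters
    under the weighted training loss, then a gradient step on the graph
    weights W through the validation loss. W holds one weight per graph in
    the training batch; step sizes are placeholders."""
    names = [n for n, _ in model.named_parameters()]
    params = [p for _, p in model.named_parameters()]

    # Lower level: weighted training loss. per_graph_loss is assumed to
    # return one loss value per graph (i.e., reduction="none").
    w = W.detach().requires_grad_(True)
    train_losses = per_graph_loss(model(train_batch), train_batch.y)
    train_loss = (torch.softmax(w, dim=0) * train_losses).sum()

    # One differentiable step on theta; create_graph=True keeps the graph so
    # the validation loss can later be differentiated w.r.t. w.
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    fast = {n: p - eta_theta * g for n, p, g in zip(names, params, grads)}

    # Upper level: validation loss through the one-step-updated parameters.
    val_logits = functional_call(model, fast, (val_batch,))
    val_loss = per_graph_loss(val_logits, val_batch.y).mean()
    w_grad = torch.autograd.grad(val_loss, w)[0]

    # Commit both updates: new graph weights and a real step on the model.
    new_W = (W - eta_w * w_grad).detach()
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= eta_theta * g.detach()
    return new_W, train_loss.item(), val_loss.item()
```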