MAG-GNN: Reinforcement Learning Boosted Graph Neural Network

Authors: Lecheng Kong, Jiarui Feng, Hao Liu, Dacheng Tao, Yixin Chen, Muhan Zhang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on many datasets, showing that MAG-GNN achieves competitive performance to state-of-the-art methods and even outperforms many subgraph GNNs."
Researcher Affiliation | Collaboration | Lecheng Kong (1), Jiarui Feng (1), Hao Liu (1), Dacheng Tao (2), Yixin Chen (1), Muhan Zhang (3). {jerry.kong, feng.jiarui, liuhao, ychen25}@wustl.edu, dacheng.tao@gmail.com, muhan@pku.edu.cn. (1) Washington University in St. Louis, (2) JD Explore Academy, (3) Peking University.
Pseudocode | Yes | Algorithm 1 RL-Experience; Algorithm 2 ORD-Train; Algorithm 3 SIMUL-Train; Algorithm 4 PRE-Train.
Open Source Code | Yes | "The code can be found at https://github.com/LechengKong/MAG-GNN"
Open Datasets | Yes | "We use the QM9 dataset provided by Pytorch-Geometric [9], and we use a train/valid/test split ratio of 0.8/0.1/0.1. ... We use the ZINC dataset provided by Pytorch-Geometric [9] and use the official split. We take OGBG-MOLHIV dataset from the Open Graph Benchmark package [14] and use their official split."
Dataset Splits | Yes | "We use the QM9 dataset provided by Pytorch-Geometric [9], and we use a train/valid/test split ratio of 0.8/0.1/0.1."
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., CPU/GPU models, memory) used for running its experiments. It mentions software implementations but not hardware.
Software Dependencies | No | The paper states "All models are implemented in DGL [29] and PyTorch [25]." However, it does not provide specific version numbers for these software dependencies, which are required for reproducibility.
Experiment Setup | Yes | "We summarize the hyperparameters used for different datasets in Table 8 and 9."
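The QM9 split above is given only as a 0.8/0.1/0.1 ratio; the paper does not specify the exact splitting procedure or seed. A minimal sketch of one plausible random index split, assuming a seeded shuffle (the function name, seed, and dataset size are illustrative, not from the paper):

```python
import random

def split_indices(n, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly partition n sample indices into train/valid/test.

    Hypothetical helper: the paper only states the 0.8/0.1/0.1
    ratio, not the shuffling procedure or seed used.
    """
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    n_train = int(ratios[0] * n)   # 80% of samples
    n_valid = int(ratios[1] * n)   # 10% of samples
    train = idx[:n_train]
    valid = idx[n_train:n_train + n_valid]
    test = idx[n_train + n_valid:]  # remaining ~10%
    return train, valid, test

# Illustrative size: PyTorch Geometric's QM9 contains roughly 130k molecules.
train, valid, test = split_indices(130831)
```

Without a published seed or split file, exact reproduction of the reported QM9 numbers may not be possible even with the same ratio.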