Facilitating Graph Neural Networks with Random Walk on Simplicial Complexes

Authors: Cai Zhou, Xiyuan Wang, Muhan Zhang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments verify the effectiveness of our random walk-based methods. ... In this section, we present a comprehensive ablation study on Zinc-12k to investigate the effectiveness of our proposed methods. We also verify the performance on graph-level OGB benchmarks. Due to the limited space, experiments on synthetic datasets and more real-world datasets as well as experimental details are presented in Appendix E."
Researcher Affiliation | Academia | Cai Zhou, Tsinghua University, zhouc20@mails.tsinghua.edu.cn; Xiyuan Wang, Peking University, wangxiyuan@pku.edu.cn; Muhan Zhang, Peking University, muhan@pku.edu.cn
Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "Code is available at https://github.com/zhouc20/HodgeRandomWalk."
Open Datasets | Yes | "Zinc-12k [17] is a popular real-world dataset containing 12k molecules. ... ogbg-molhiv and ogbg-molpcba are from Open Graph Benchmark [26]... PCQM-Contact, Peptides-func and Peptides-struct are from Long-Range Graph Benchmark [19]."
Dataset Splits | Yes | "We follow the common predefined 10K/1K/1K train/validation/test split. ... we follow the standard dataset splits as the original image classification datasets, i.e., 55K/5K/10K for MNIST and 45K/5K/10K for CIFAR10 of train/validation/test graphs, respectively."
Hardware Specification | Yes | "For example, the average generation times on Zinc computed by RTX3090 are: RWSE (23s), Edge RWSE (32s), Hodge1Lap (28s)."
Software Dependencies | No | The paper names the deep learning models and frameworks it builds on (e.g., GINE, GAT, GPS), but does not specify the programming language (e.g., Python) or library versions (e.g., a PyTorch version number) required for replication.
Experiment Setup | Yes | "To verify that our methods are capable of improving the performance of the base models, all hyperparameters including training configuration and model hyperparameters are set the same as in [38]. For the edge PE/SE, we keep the embedding dimensions the same as the node PE/SE in GPS models."