Distributed-Order Fractional Graph Operating Network

Authors: Kai Zhao, Xuhao Li, Qiyu Kang, Feng Ji, Qinxu Ding, Yanan Zhao, Wenfei Liang, Wee Peng Tay

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct empirical evaluations across a range of graph learning tasks. The results consistently demonstrate superior performance when compared to traditional continuous GNN models.
Researcher Affiliation | Academia | 1Nanyang Technological University, 2Anhui University, 3Singapore University of Social Sciences
Pseudocode | No | The paper describes methods and formulas but does not include structured pseudocode or algorithm blocks (e.g., a clearly labeled 'Algorithm' section).
Open Source Code | Yes | The implementation code is available at https://github.com/zknus/NeurIPS-2024-DRAGON.
Open Datasets | Yes | For our evaluation on homophilic datasets, we leverage a diverse set of datasets including citation networks (Cora [48], Citeseer [49], Pubmed [50]), tree-structured datasets (Disease and Airport [51]), as well as coauthor and co-purchasing graphs (Coauthor CS [52], Computer and Photo [53]).
Dataset Splits | Yes | For the Disease and Airport datasets, we follow the data partitioning and preprocessing procedures as described in [51]. For all other datasets, we adopt random splits for the largest connected component (LCC), in line with the approach detailed in [7]. ... we follow the data splitting strategy described in [66], dividing the data into 50% for training, 25% for validation, and 25% for testing.
Hardware Specification | Yes | All experiments are conducted on NVIDIA GeForce RTX 3090 or A5000 GPUs with 24GB of memory.
Software Dependencies | No | We generate the data through an open source package FractionalDiffEq.jl (https://scifracx.org/FractionalDiffEq.jl/stable/) that is totally driven by Julia and licensed with MIT License. While a package is mentioned, specific version numbers for all key software components are not provided in the text.
Experiment Setup | Yes | The hyperparameters employed in Table 4 are detailed in Table 18. ... For example, the Table 18 row for D-CDE on Roman-empire reads: lr 0.005, weight decay 0.0001, indrop 0.4, dropout 0.2, hidden dim 80, time 4, step size 0.2.
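The 50/25/25 random train/validation/test split quoted in the Dataset Splits row can be sketched in a few lines of plain Python. Note that `split_indices` is a hypothetical helper written for illustration; it is not taken from the paper's repository, and the actual splits there may differ in seeding and in how the largest connected component is extracted.

```python
import random


def split_indices(num_nodes, seed=0):
    """Randomly partition node indices into 50% train, 25% val, 25% test.

    Hypothetical helper mirroring the split ratios quoted from the paper;
    the real code may seed and shuffle differently.
    """
    rng = random.Random(seed)
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = num_nodes // 2
    n_val = num_nodes // 4
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test


train_idx, val_idx, test_idx = split_indices(100)
```

For 100 nodes this yields 50/25/25 disjoint index sets covering every node exactly once.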
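To make the quoted Table 18 row easier to reuse, the same configuration can be written as a plain Python dict. The values are transcribed from the excerpt above; the dict layout and key names (derived from the table's column headers) are our own convention, not part of the released code.

```python
# Hyperparameters for D-CDE on Roman-empire, transcribed from the
# Table 18 excerpt quoted in the Experiment Setup row.
# Key names follow the table's column headers; this layout is ours.
dcde_roman_empire = {
    "lr": 0.005,          # learning rate
    "weight_decay": 0.0001,
    "indrop": 0.4,        # input dropout
    "dropout": 0.2,
    "hidden_dim": 80,
    "time": 4,            # ODE integration time
    "step_size": 0.2,     # solver step size
}
```

A dict like this can be unpacked into an optimizer or model constructor during a reproduction attempt.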