Hierarchical Diffusion Scattering Graph Neural Network
Authors: Ke Zhang, Xinyan Pu, Jiaxing Li, Jiasong Wu, Huazhong Shu, Youyong Kong
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We benchmark our model on nine real-world networks on the transductive semi-supervised node classification task. The experimental results demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, School of Computer Science and Engineering, Southeast University, Nanjing, China {kylenz, 220201976, jiaxing li, jswu, shu.list, kongyouyong}@seu.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks, only mathematical equations and descriptive text. |
| Open Source Code | Yes | The codes are available at https://github.com/Anfankus/hds-gnn |
| Open Datasets | Yes | Datasets: We choose nine benchmarks for experiments: (1) four citation networks: Cora, Citeseer, Pubmed [Sen et al., 2008] and DBLP [Bojchevski and Günnemann, 2018]; (2) two co-purchase networks: Amazon Computers and Amazon Photo [Shchur et al., 2018]; (3) one co-authorship network: Coauthor CS [Shchur et al., 2018]; (4) two WebKB networks: Cornell and Texas [Pei et al., 2019]. |
| Dataset Splits | Yes | We use sparse splitting (20 per class/500/1000) [Kipf and Welling, 2017] for the citation, co-purchase, and co-authorship networks, and dense splitting (60%/20%/20%) [Pei et al., 2019] for the WebKB networks (a split-loading sketch appears below the table). |
| Hardware Specification | Yes | All the experiments run in PyTorch on NVIDIA 3090. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'pyg' (PyTorch Geometric) as software used, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | We use Adam as the training optimizer and the tool hyperopt [Bergstra et al., 2013] for hyper-parameter searching. We set the maximum training epoch to 300 and use early stopping when the validation loss does not decrease for 20 consecutive epochs. The weights used for testing are taken from the checkpoint with the lowest validation loss during training (a training-loop sketch appears below the table). |
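
The split protocol quoted in the Dataset Splits row maps onto standard PyTorch Geometric loaders. The sketch below is not the authors' released code: the `Planetoid` and `WebKB` dataset classes and the `split='public'` option are real PyG APIs, while the `dense_masks` helper and its random 60%/20%/20% partition are assumptions standing in for the [Pei et al., 2019] splits.

```python
# Minimal sketch (not the authors' code) of the two split regimes reported above.
import torch
from torch_geometric.datasets import Planetoid, WebKB

# Sparse splitting (20 per class / 500 / 1000): the standard Kipf & Welling
# "public" split shipped with the Planetoid citation datasets.
cora = Planetoid(root='data/Cora', name='Cora', split='public')[0]
print(int(cora.train_mask.sum()), int(cora.val_mask.sum()), int(cora.test_mask.sum()))

# Dense splitting (60%/20%/20%) for the WebKB graphs (Cornell, Texas).
# The random-permutation masks below are an assumption, not the exact
# [Pei et al., 2019] splits.
def dense_masks(num_nodes, train=0.6, val=0.2, seed=0):
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train, n_val = int(train * num_nodes), int(val * num_nodes)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask

cornell = WebKB(root='data/Cornell', name='Cornell')[0]
cornell.train_mask, cornell.val_mask, cornell.test_mask = dense_masks(cornell.num_nodes)
```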
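
Likewise, the training protocol in the Experiment Setup row can be sketched as a plain PyTorch loop. Only the quoted settings (Adam, a 300-epoch cap, a patience of 20 epochs, and testing with the lowest-validation-loss checkpoint) come from the paper; the generic `model(data.x, data.edge_index)` interface and the `lr`/`weight_decay` defaults are placeholder assumptions, since the paper tunes its hyper-parameters with hyperopt.

```python
# Minimal sketch of the reported training protocol, under the assumptions above.
import copy
import torch
import torch.nn.functional as F

def train(model, data, max_epochs=300, patience=20, lr=0.01, weight_decay=5e-4):
    # lr and weight_decay are illustrative defaults, not values from the paper.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    best_val_loss, best_state, epochs_no_improve = float('inf'), None, 0

    for epoch in range(max_epochs):
        model.train()
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()

        model.eval()
        with torch.no_grad():
            out = model(data.x, data.edge_index)
            val_loss = F.cross_entropy(out[data.val_mask], data.y[data.val_mask]).item()

        if val_loss < best_val_loss:
            best_val_loss, epochs_no_improve = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())  # lowest-val-loss checkpoint
        else:
            epochs_no_improve += 1
            if epochs_no_improve >= patience:  # early stopping after 20 stagnant epochs
                break

    model.load_state_dict(best_state)  # weights used for testing
    return model
```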