Wiener Graph Deconvolutional Network Improves Graph Self-Supervised Learning

Authors: Jiashun Cheng, Man Li, Jia Li, Fugee Tsung

AAAI 2023

Reproducibility assessment (each variable with its result and the LLM's supporting response):
Research Type: Experimental
    "Extensive experimental results on various datasets demonstrate the effectiveness of our approach. Empirically, our proposed WGDN achieves better results over a wide range of state-of-the-art benchmarks of graph SSL with efficient computational cost."

Researcher Affiliation: Academia
    "Jiashun Cheng (2), Man Li (2), Jia Li (1,2)*, Fugee Tsung (1,2). (1) The Hong Kong University of Science and Technology (Guangzhou); (2) The Hong Kong University of Science and Technology. {jchengak, mlicn}@connect.ust.hk, {jialee, season}@ust.hk"

Pseudocode: No
    The paper does not contain any clearly labeled pseudocode or algorithm blocks.

Open Source Code: No
    The paper does not provide an explicit statement about releasing its source code or a link to a code repository for the described methodology.

Open Datasets: Yes
    "We conduct experiments on both node-level and graph-level representation learning tasks with benchmark datasets across different scales and domains, including PubMed (Sen et al. 2008), Amazon Computers, Photo (Shchur et al. 2018), Coauthor CS, Physics (Shchur et al. 2018), and IMDB-B, IMDB-M, PROTEINS, COLLAB, DD, NCI1 from TUDataset (Morris et al. 2020)." (A loading sketch for these datasets appears after this list.)

Dataset Splits: Yes
    "For node classification training, we use the public split for PubMed and follow a 10/10/80% random split for the rest. For graph classification, we feed the graph representation into a linear SVM, and report the mean 10-fold cross-validation accuracy with standard deviation after 5 runs (Xu et al. 2021)." (An evaluation sketch follows after this list.)

Hardware Specification: No
    The paper mentions 'GPU overhead' and 'memory requirement' in Table 4, implying the use of GPUs, but does not specify the exact GPU model or other hardware components used for experiments.

Software Dependencies: No
    The paper does not specify any software or library names along with their version numbers (e.g., 'Python 3.8', 'PyTorch 1.9').

Experiment Setup: Yes
    "For fair comparisons, we set the embedding size of all models as 512 and follow their suggested hyper-parameter settings. For the spectral filter, we consider the heat kernel g_c(λ_i) = e^{-tλ_i} with diffusion time t = 1 and the PPR kernel g_c(λ_i) = α / (1 - (1 - α)(1 - λ_i)) with teleport probability α = 0.2. Further details of model configurations (e.g., hyper-parameter selection) can be found in Appendix F.2." (A numerical sketch of these filters follows after this list.)
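For readers reproducing the data setup quoted under Open Datasets, the sketch below loads the named benchmarks. The paper states no software dependencies (see the Software Dependencies entry), so the use of PyTorch Geometric here, along with the root paths, is an assumption rather than the authors' stated pipeline.

```python
# Hypothetical loaders for the benchmarks named in the paper; the library
# choice (PyTorch Geometric) and root paths are assumptions, not the
# authors' stated setup.
from torch_geometric.datasets import Planetoid, Amazon, Coauthor, TUDataset

# Node-level benchmarks
pubmed = Planetoid(root="data/Planetoid", name="PubMed")
computers = Amazon(root="data/Amazon", name="Computers")
photo = Amazon(root="data/Amazon", name="Photo")
cs = Coauthor(root="data/Coauthor", name="CS")
physics = Coauthor(root="data/Coauthor", name="Physics")

# Graph-level benchmarks from the TUDataset collection (Morris et al. 2020).
# Note: TUDataset uses "IMDB-BINARY" / "IMDB-MULTI" as the canonical names
# for the datasets the paper calls IMDB-B and IMDB-M.
graph_datasets = {
    name: TUDataset(root="data/TUDataset", name=name)
    for name in ["IMDB-BINARY", "IMDB-MULTI", "PROTEINS", "COLLAB", "DD", "NCI1"]
}
```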
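The protocol quoted under Dataset Splits can be sketched as follows. The 10/10/80% node split and the 10-fold linear-SVM evaluation follow the quoted text; the SVM hyper-parameters and the seeding scheme are assumptions, since the paper does not state them. `embeddings` and `labels` are assumed to come from a trained encoder (not shown).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def random_node_split(num_nodes, train=0.1, val=0.1, seed=0):
    """10/10/80% random train/val/test split over node indices."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train, n_val = int(train * num_nodes), int(val * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

def graph_svm_accuracy(embeddings, labels, runs=5):
    """Mean 10-fold CV accuracy of a linear SVM, averaged over `runs` seeds,
    as in the quoted protocol; SVM settings here are a guess."""
    scores = []
    for seed in range(runs):
        clf = LinearSVC(random_state=seed)
        scores.append(cross_val_score(clf, embeddings, labels, cv=10).mean())
    return float(np.mean(scores)), float(np.std(scores))
```

Per the quoted text, PubMed uses its public split rather than the random split helper above.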
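Finally, the two spectral filters quoted under Experiment Setup can be written out numerically. The PDF extraction garbled both formulas; the forms below follow the standard heat-kernel and PPR definitions (negative exponent; geometric-series denominator), which is an assumption about the paper's exact notation.

```python
import numpy as np

def heat_kernel(lam, t=1.0):
    # g_c(lambda_i) = exp(-t * lambda_i); diffusion time t = 1 in the paper.
    # The minus sign is assumed from the standard heat-kernel definition.
    return np.exp(-t * lam)

def ppr_kernel(lam, alpha=0.2):
    # g_c(lambda_i) = alpha / (1 - (1 - alpha) * (1 - lambda_i));
    # teleport probability alpha = 0.2 in the paper.
    return alpha / (1.0 - (1.0 - alpha) * (1.0 - lam))

# Example: evaluate both filters on Laplacian eigenvalues in [0, 2].
lam = np.linspace(0.0, 2.0, 5)
print(heat_kernel(lam))
print(ppr_kernel(lam))
```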