Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization
Authors: Mingxuan Ju, Tong Zhao, Qianlong Wen, Wenhao Yu, Neil Shah, Yanfang Ye, Chuxu Zhang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments over four downstream tasks (i.e., node classification, node clustering, link prediction, and partition prediction), and our proposal achieves the best overall performance across tasks on 11 widely adopted benchmark datasets. |
| Researcher Affiliation | Collaboration | Mingxuan Ju1, Tong Zhao2, Qianlong Wen1, Wenhao Yu1, Neil Shah2, Yanfang Ye1, Chuxu Zhang3 1University of Notre Dame, 2Snap Inc., 3Brandeis University 1{mju2,yye7}@nd.edu; 2{tzhao,nshah}@snap.com; 3chuxuzhang@brandeis.edu |
| Pseudocode | No | The paper describes mathematical formulations and iterative processes (e.g., Equation (5) and MGDA), but it does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/jumxglhf/ParetoGNN. |
| Open Datasets | Yes | We conduct comprehensive experiments on 11 real-world benchmark datasets extensively explored by the graph community... The detailed description of these datasets can be found in Appendix C. For Wiki-CS, Pubmed, Amazon-Photo, Amazon-Computer, Coauthor-CS, and Coauthor-Physics, we use the API from Deep Graph Library (DGL)1 to load the datasets. For ogbn-arxiv and ogbn-products, we use the API from Open Graph Benchmark (OGB)2. For Chameleon, Actor and Squirrel, the datasets are downloaded from the official repository of Geom-GCN (Pei et al., 2019)3. |
| Dataset Splits | Yes | For datasets whose public splits are available (i.e., ogbn-arxiv and ogbn-products), we utilize their given public splits for the evaluations on node classification, node clustering, and partition prediction. For the other datasets, we use a random 10%/10%/80% train/validation/test split, following the same setting explored in other literature. |
| Hardware Specification | Yes | We conduct experiments on a server having one RTX 3090 GPU with 24 GB VRAM. The CPU we have on the server is an AMD Ryzen 3990X with 128 GB RAM. |
| Software Dependencies | Yes | The software we use includes DGL 0.9.0 and PyTorch 1.11.0. |
| Experiment Setup | Yes | The hyper-parameters for PARETOGNN across all datasets are listed in Table 5. |
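The dataset-loading and splitting procedure quoted above (DGL/OGB loaders plus a random 10%/10%/80% split for datasets without public splits) can be sketched as follows. This is a minimal illustration, not the authors' code: the DGL/OGB loader calls are shown as comments, and `random_split` is a hypothetical helper implementing the stated split ratios.

```python
import random

# Datasets with public splits (ogbn-arxiv, ogbn-products) would be loaded via OGB, e.g.:
#   from ogb.nodeproppred import DglNodePropPredDataset
#   dataset = DglNodePropPredDataset(name="ogbn-arxiv")
# Others (Wiki-CS, Pubmed, etc.) via DGL's dataset API, e.g.:
#   import dgl.data
#   graph = dgl.data.WikiCSDataset()[0]

def random_split(num_nodes, train_frac=0.1, val_frac=0.1, seed=0):
    """Randomly split node indices into train/val/test sets.

    With the paper's 10%/10%/80% setting, train_frac and val_frac
    are both 0.1 and the remaining 80% of nodes form the test set.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(num_nodes * train_frac)
    n_val = int(num_nodes * val_frac)
    train_idx = idx[:n_train]
    val_idx = idx[n_train:n_train + n_val]
    test_idx = idx[n_train + n_val:]
    return train_idx, val_idx, test_idx

train_idx, val_idx, test_idx = random_split(1000)
```

For a 1,000-node graph this yields 100 training, 100 validation, and 800 test nodes, with every node assigned to exactly one partition.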