Self-Supervised Graph Neural Networks via Diverse and Interactive Message Passing
Authors: Liang Yang, Cheng Chen, Weixun Li, Bingxin Niu, Junhua Gu, Chuan Wang, Dongxiao He, Yuanfang Guo, Xiaochun Cao
AAAI 2022, pp. 4327-4336 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations on node-level and graph-level tasks demonstrate the superiority of DIMP in improving performance and overcoming the over-smoothing issue. |
| Researcher Affiliation | Academia | (1) School of Artificial Intelligence, Hebei University of Technology, Tianjin, China; (2) State Key Laboratory of Information Security, IIE, CAS, Beijing, China; (3) College of Intelligence and Computing, Tianjin University, Tianjin, China; (4) State Key Laboratory of Software Development Environment, Beihang University, China; (5) School of Cyber Science and Technology, Shenzhen Campus, Sun Yat-sen University, Shenzhen 518107, China |
| Pseudocode | No | The paper describes its method using mathematical equations and text, but it does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | Citation Networks. Cora, Citeseer, and Pubmed, which are widely used to verify GNNs, are standard citation network benchmark datasets (Sen et al. 2008; Namata et al. 2012). Co-purchase Networks. Amazon-C and Amazon-P are two networks of co-purchase relationships (Shchur et al. 2019). Coauthor Networks. Coauthor CS and Coauthor-P are co-author networks based on the Microsoft Academic Graph from the KDD Cup 2016 challenge. Statistics of datasets used for graph-level tasks are shown in Table ??. These datasets are from (Yanardag and Vishwanathan 2015). |
| Dataset Splits | Yes | On citation networks, we use 20 labeled nodes per class as the training set, 20 nodes per class as the validation set, and the rest as the testing set as in (Yang, Cohen, and Salakhutdinov 2016). On co-purchase and co-author networks, we use 30 labeled nodes per class as the training set, 30 nodes per class as the validation set, and the rest as the testing set. (A sketch of this per-class split protocol follows the table.) |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, or cloud computing instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' and 'Xavier initialization' but does not provide specific version numbers for these or any other software dependencies like programming languages or libraries. |
| Experiment Setup | Yes | The proposed DIMP employs a 4-layer message passing network. The parameters in the mapping function and the discriminator function are initialized using Xavier initialization and trained using the Adam optimizer with an initial learning rate of 0.001. The number of epochs and the batch size are chosen from [10, 20, 40, 100] and [32, 64, 128, 256], respectively. Besides, early stopping with a patience of 20 is also utilized. For classification tasks, the parameter C of the SVM is chosen from [10^-3, 10^-2, ..., 10^2, 10^3]. (A hedged code sketch of this setup follows the table.) |
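The dataset-split row above describes a standard per-class split (20 nodes per class for training and validation on citation networks, 30 per class on co-purchase/co-author networks, the rest for testing). Below is a minimal sketch of that protocol, not the authors' code; the function name `per_class_split` and the `labels`/`k`/`seed` arguments are illustrative assumptions.

```python
import numpy as np

def per_class_split(labels, k, seed=0):
    """Return boolean train/val/test masks with k nodes per class in train and val."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    n = labels.shape[0]
    train_mask = np.zeros(n, dtype=bool)
    val_mask = np.zeros(n, dtype=bool)
    for c in np.unique(labels):
        # Shuffle the indices of class c, then take the first k for training
        # and the next k for validation.
        idx = rng.permutation(np.where(labels == c)[0])
        train_mask[idx[:k]] = True
        val_mask[idx[k:2 * k]] = True
    # Everything not in train or validation becomes the test set.
    test_mask = ~(train_mask | val_mask)
    return train_mask, val_mask, test_mask

# Usage (assumed label array): k=20 for citation networks, k=30 for
# co-purchase and co-author networks.
# train_mask, val_mask, test_mask = per_class_split(labels, k=20)
```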
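The experiment-setup row quotes the paper's training recipe: Xavier initialization, Adam with an initial learning rate of 0.001, early stopping with a patience of 20, and a linear SVM whose C is searched over 10^-3 to 10^3. The sketch below wires those pieces together under stated assumptions; the DIMP model itself is not reproduced, so the encoder, the loss, and the validation metric are placeholders.

```python
import torch
import torch.nn as nn
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def init_xavier(module):
    # Xavier initialization for linear layers, as stated in the paper.
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))  # placeholder encoder
model.apply(init_xavier)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # initial learning rate 0.001

best_val, patience, wait = float("inf"), 20, 0  # early stopping with patience 20
for epoch in range(100):  # epoch budget chosen from [10, 20, 40, 100] in the paper
    optimizer.zero_grad()
    loss = model(torch.randn(32, 128)).pow(2).mean()  # placeholder self-supervised loss
    loss.backward()
    optimizer.step()
    val_loss = loss.item()  # placeholder validation metric
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break

# Downstream classification: an SVM on the frozen embeddings, with C searched
# over [10^-3, ..., 10^3] (hypothetical train_embeddings / train_labels names).
# grid = GridSearchCV(SVC(), {"C": [10.0 ** p for p in range(-3, 4)]}, cv=5)
# grid.fit(train_embeddings, train_labels)
```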