Graph Adversarial Self-Supervised Learning
Authors: Longqi Yang, Liangliang Zhang, Wenjing Yang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on ten graph classification datasets show that the proposed approach is superior to state-of-the-art self-supervised learning baselines, which are competitive with supervised models. |
| Researcher Affiliation | Academia | Longqi Yang: Institute for Quantum Information & State Key Laboratory of High Performance Computing, College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China; Defense Innovation Institute, Beijing 100071, China. Liangliang Zhang: Institute of Systems Engineering, AMS, Beijing, China. Wenjing Yang: Institute for Quantum Information & State Key Laboratory of High Performance Computing, College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China. |
| Pseudocode | Yes | Algorithm 1 Graph Adversarial Self-Supervised Learning (GASSL). (A hedged training-step sketch follows the table.) |
| Open Source Code | No | The paper does not provide any explicit statements or links to open-source code for the described methodology. |
| Open Datasets | Yes | We selected 10 widely used graph classification datasets from TU datasets [26] and Open Graph Benchmark (OGB) [27]. ... For OGB datasets, we evaluate the performance with their original feature extraction and following the original training/validation/test dataset splits [27]. (See the dataset-loading sketch after the table.) |
| Dataset Splits | Yes | For all experiments on the TU dataset, we follow [17, 31] and report the mean 10-fold cross-validation accuracy with standard deviation after 5 runs, followed by a linear SVM. ... For OGB datasets, we evaluate the performance with their original feature extraction and following the original training/validation/test dataset splits [27]. (See the cross-validation sketch after the table.) |
| Hardware Specification | Yes | We conduct all the experiments on an Nvidia TITAN Xp. |
| Software Dependencies | No | The paper mentions using the Adam optimizer, GCN, and GIN, but does not provide version numbers for any software libraries or dependencies. |
| Experiment Setup | Yes | We train the model using the Adam optimizer with an initial learning rate of 10^-4, and we choose the number of GCN and GIN layers from {2, 3, 4, 5}, the number of epochs from {20, 40, 100, 200}, the batch size from {32, 64, 128, 256, 512, 1024}, and the SVM parameter C from {10^-3, 10^-2, ..., 10^2, 10^3}. The step size α is set to 8×10^-3, the perturbation bound ϵ is set to 8×10^-3, and the embedding dimension is set to 128 (except HIV, which is set to 512). We also use early stopping with a patience of 20, i.e., we stop training if the validation loss does not improve for 20 epochs. (A configuration sketch follows the table.) |
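The paper's Algorithm 1 (GASSL) is not reproduced here, but the following is a minimal PyTorch sketch of the kind of adversarial training step it describes: a PGD-style inner ascent on a bounded perturbation that maximizes a self-supervised loss, followed by an outer descent step on the perturbed objective. The defaults for `alpha` and `epsilon` match the quoted 8×10^-3 values; `encoder`, `ssl_loss`, `ascent_steps`, and the choice to perturb the input features (rather than a hidden layer) are our assumptions, not the published pseudocode. The sketch assumes a PyTorch Geometric `batch` with `x` and `edge_index` attributes.

```python
import torch

def gassl_step(encoder, ssl_loss, batch, optimizer,
               alpha=8e-3, epsilon=8e-3, ascent_steps=3):
    """One hypothetical GASSL-style step: inner maximization finds a
    perturbation delta on the node features that increases the
    self-supervised loss; outer minimization updates the encoder on
    the perturbed objective."""
    x, edge_index = batch.x, batch.edge_index
    delta = torch.zeros_like(x, requires_grad=True)

    # Inner maximization: PGD-style ascent on the perturbation.
    for _ in range(ascent_steps):
        loss = ssl_loss(encoder(x + delta, edge_index))
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()      # ascend the loss
            delta.clamp_(-epsilon, epsilon)   # stay inside the L-inf ball

    # Outer minimization: standard descent on the perturbed objective.
    optimizer.zero_grad()
    loss = ssl_loss(encoder(x + delta.detach(), edge_index))
    loss.backward()
    optimizer.step()
    return loss.item()
```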
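For the "Open Datasets" row, a short sketch of how the two dataset families are typically loaded, assuming PyTorch Geometric for the TU datasets and the `ogb` package for OGB. The dataset names (`MUTAG`, `ogbg-molhiv`) are illustrative stand-ins for the ten datasets the paper uses; note that the OGB loader ships the original splits the paper says it reuses.

```python
from torch_geometric.datasets import TUDataset
from ogb.graphproppred import PygGraphPropPredDataset

# TU datasets have no canonical split, so the paper evaluates them
# with 10-fold cross-validation (see the next sketch).
tu = TUDataset(root='data/TU', name='MUTAG')  # name is illustrative

# OGB datasets ship their own train/valid/test splits, which the
# paper reuses unchanged.
ogb = PygGraphPropPredDataset(name='ogbg-molhiv', root='data/OGB')
split_idx = ogb.get_idx_split()
train_set = ogb[split_idx['train']]
valid_set = ogb[split_idx['valid']]
test_set = ogb[split_idx['test']]
```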
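For the "Dataset Splits" row, a scikit-learn sketch of the quoted TU evaluation protocol: 10-fold cross-validation over frozen graph embeddings, a linear SVM with C searched over {10^-3, ..., 10^3}, and mean/std reported over 5 runs. Taking the standard deviation over runs (rather than folds) is our reading of the quoted text, and `evaluate_embeddings` is a hypothetical helper, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.svm import LinearSVC

def evaluate_embeddings(emb, labels, n_runs=5, n_folds=10):
    """Fit a linear SVM on frozen graph embeddings and report the
    mean and std of 10-fold CV accuracy across n_runs random splits."""
    param_grid = {'C': [10.0 ** k for k in range(-3, 4)]}
    run_means = []
    for seed in range(n_runs):
        folds = StratifiedKFold(n_splits=n_folds, shuffle=True,
                                random_state=seed)
        accs = []
        for train_idx, test_idx in folds.split(emb, labels):
            # Select C on the training portion of each fold.
            clf = GridSearchCV(LinearSVC(), param_grid, cv=5)
            clf.fit(emb[train_idx], labels[train_idx])
            accs.append(clf.score(emb[test_idx], labels[test_idx]))
        run_means.append(np.mean(accs))
    return np.mean(run_means), np.std(run_means)
```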
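Finally, for the "Experiment Setup" row, a configuration sketch assuming PyTorch: the quoted Adam learning rate, the hyperparameter grids, and early stopping with a patience of 20 on validation loss. The paper does not state how the grid is searched, so the exhaustive `itertools.product` loop is an assumption, and `build_model`, `train_one_epoch`, and `validate` are hypothetical placeholders.

```python
import itertools
import torch

# Grids quoted in the paper; the search strategy is not specified.
grid = {
    'num_layers': [2, 3, 4, 5],
    'epochs': [20, 40, 100, 200],
    'batch_size': [32, 64, 128, 256, 512, 1024],
}

for num_layers, max_epochs, batch_size in itertools.product(*grid.values()):
    model = build_model(num_layers=num_layers, hidden_dim=128)  # hypothetical
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    best_val, patience, wait = float('inf'), 20, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, optimizer, batch_size)  # hypothetical
        val_loss = validate(model)                     # hypothetical
        if val_loss < best_val:
            best_val, wait = val_loss, 0               # improvement resets counter
        else:
            wait += 1
            if wait >= patience:                       # stop after 20 stale epochs
                break
```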