Graph Self-supervised Learning with Accurate Discrepancy Learning

Authors: Dongki Kim, Jinheon Baek, Sung Ju Hwang

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our D-SLA on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which ours largely outperforms relevant baselines. In this section, we first experimentally validate the proposed Discrepancy-based Self-supervised LeArning (D-SLA) on graph classification tasks to verify its effectiveness in obtaining accurate graph-level representations.
Researcher Affiliation | Collaboration | Dongki Kim^1, Jinheon Baek^1, Sung Ju Hwang^1,2; ^1KAIST, ^2AITRICS, South Korea
Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Code is available at https://github.com/DongkiKim95/D-SLA
Open Datasets | Yes | For the self-supervised learning, we use 2M molecules from the ZINC15 dataset [26]. Then, after the self-supervised learning, we perform fine-tuning on datasets from MoleculeNet [34] to evaluate the downstream performances of models. For pre-training and fine-tuning, we use the dataset of PPI networks [47].
Dataset Splits | Yes | We separate the dataset into four parts: pre-training, training, validation, and test sets in the ratio of 5:1:1:3. (An illustrative split sketch follows the table.)
Hardware Specification | No | No specific hardware details (such as GPU/CPU models or types) used for the experiments are mentioned in the provided text.
Software Dependencies | No | The paper mentions using GIN [38] and GCN [15] as base networks, but does not specify any software names with version numbers for libraries, frameworks, or programming languages (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For pre-training of our model, we perturb the original graph three times. To be specific, to obtain the perturbed graphs from the original graph, we first randomly select a subgraph of it, and then add or remove edges in the range of {20%, 40%, 60%} over the sampled subgraph. For fine-tuning, we follow the hyperparameters from You et al. [43]. For perturbation, we add or delete only a small number of edges (e.g., 1 or 2 edges) while increasing the magnitude of perturbation to obtain three perturbed graphs. (An illustrative perturbation sketch follows the table.)
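
To make the 5:1:1:3 split from the Dataset Splits row concrete, here is a minimal Python sketch that partitions a list of examples into pre-training, training, validation, and test subsets. The shuffling, the fixed seed, and the function name are illustrative assumptions, not details taken from the paper.

```python
import random

def split_dataset(examples, ratios=(5, 1, 1, 3), seed=0):
    """Shuffle and partition examples into pre-training, training,
    validation, and test subsets according to the given ratios."""
    rng = random.Random(seed)  # fixed seed is an assumption, used only for reproducibility
    examples = examples[:]
    rng.shuffle(examples)

    total = sum(ratios)
    n = len(examples)
    # Cumulative boundaries for the 5:1:1:3 split.
    bounds, acc = [], 0
    for r in ratios[:-1]:
        acc += r
        bounds.append(n * acc // total)

    pretrain = examples[:bounds[0]]
    train = examples[bounds[0]:bounds[1]]
    valid = examples[bounds[1]:bounds[2]]
    test = examples[bounds[2]:]
    return pretrain, train, valid, test
```
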
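The Experiment Setup row describes sampling a subgraph of the original graph and adding or removing {20%, 40%, 60%} of its edges to obtain three perturbed views. Below is a rough NetworkX sketch of such an edge perturbation; the node-induced subgraph sampling, the 50/50 add/remove mix, and all parameter names are assumptions, since the quoted text does not specify these details.

```python
import random
import networkx as nx

def perturb_graph(graph, ratio, subgraph_frac=0.5, seed=None):
    """Return a copy of `graph` with edges added/removed inside a random subgraph.

    `ratio` is the perturbation magnitude (e.g. 0.2, 0.4, 0.6). The node-induced
    subgraph sampling and the even add/remove mix are assumptions for illustration.
    """
    rng = random.Random(seed)
    g = graph.copy()

    # Sample a node-induced subgraph covering a fraction of the nodes.
    nodes = list(g.nodes())
    sub_nodes = rng.sample(nodes, max(2, int(len(nodes) * subgraph_frac)))
    sub_edges = [e for e in g.edges() if e[0] in sub_nodes and e[1] in sub_nodes]

    n_perturb = max(1, int(len(sub_edges) * ratio))
    for _ in range(n_perturb):
        if rng.random() < 0.5 and sub_edges:
            # Remove an existing edge within the subgraph (skip if already removed).
            u, v = rng.choice(sub_edges)
            if g.has_edge(u, v):
                g.remove_edge(u, v)
        else:
            # Add an edge between two subgraph nodes that are not yet connected.
            u, v = rng.sample(sub_nodes, 2)
            if not g.has_edge(u, v):
                g.add_edge(u, v)
    return g

# Three perturbed views with increasing magnitude, as described in the setup.
original = nx.erdos_renyi_graph(30, 0.2, seed=0)
perturbed = [perturb_graph(original, r, seed=i) for i, r in enumerate((0.2, 0.4, 0.6))]
```

The sketch fixes the add/remove decision per edge at random; the paper's setup only states the overall perturbation magnitudes, so any particular policy here is a design choice of the illustration.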