Self-supervised Graph-level Representation Learning with Local and Global Structure
Authors: Minghao Xu, Hang Wang, Bingbing Ni, Hongyu Guo, Jian Tang
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both chemical and biological benchmark data sets demonstrate the effectiveness of the proposed approach. |
| Researcher Affiliation | Academia | 1 Shanghai Jiao Tong University, 2 National Research Council Canada, 3 Mila – Québec AI Institute, 4 CIFAR AI Research Chair, 5 HEC Montréal. |
| Pseudocode | Yes | Algorithm 1: Optimization Algorithm of GraphLoG. |
| Open Source Code | No | The paper mentions checking 'released source code' for other methods but does not provide a link or statement for its own source code. |
| Open Datasets | Yes | Specifically, a subset of the ZINC15 database (Sterling & Irwin, 2015) with 2 million unlabeled molecules is employed for self-supervised pre-training. Eight binary classification data sets in MoleculeNet (Wu et al., 2018) serve as downstream tasks... |
| Dataset Splits | Yes | Eight binary classification data sets in MoleculeNet (Wu et al., 2018) serve as downstream tasks, where the scaffold split scheme (Chen et al., 2012a) is used to split each data set (see the scaffold-split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions optimizers (Adam) and networks (GIN) but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries with versions). |
| Experiment Setup | Yes | We use an Adam optimizer (Kingma & Ba, 2015) (learning rate: 1e-3) to pre-train the GNN... Unless otherwise specified, the batch size N is set to 512, and the hierarchical prototype depth Lp is set to 3. For fine-tuning... an Adam optimizer (learning rate: 1e-3, fine-tuning batch size: 32) is employed to train the model for 100 epochs (see the training-setup sketch after the table). |
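The scaffold split referenced in the Dataset Splits row partitions molecules by their Bemis–Murcko scaffold so that structurally similar compounds do not leak across train/validation/test sets. Below is a minimal sketch of that idea using RDKit's `MurckoScaffold` utilities; the function name and the greedy group-assignment policy are illustrative, not taken from the paper's own code.

```python
# Hypothetical scaffold split in the style of MoleculeNet (Wu et al., 2018).
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(smiles_list, frac_train=0.8, frac_valid=0.1):
    """Group molecules by Bemis-Murcko scaffold, then assign whole
    groups to train/valid/test so no scaffold spans two splits."""
    groups = defaultdict(list)
    for idx, smiles in enumerate(smiles_list):
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(
            smiles=smiles, includeChirality=False)
        groups[scaffold].append(idx)

    # Assign larger scaffold groups first; rare scaffolds end up in
    # valid/test, which makes the evaluation splits more challenging.
    ordered = sorted(groups.values(), key=len, reverse=True)

    n = len(smiles_list)
    train, valid, test = [], [], []
    for group in ordered:
        if len(train) + len(group) <= frac_train * n:
            train.extend(group)
        elif len(valid) + len(group) <= frac_valid * n:
            valid.extend(group)
        else:
            test.extend(group)
    return train, valid, test
```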
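The Experiment Setup row quotes the paper's optimizer and batch-size choices. A minimal PyTorch sketch of that two-stage schedule follows; the model, the toy tensors standing in for molecular graphs, and the dummy loss are placeholders for illustration, not the authors' implementation.

```python
# Sketch of the stated optimization setup, assuming PyTorch.
import torch
from torch import nn

# Placeholder encoder standing in for the paper's GIN backbone; the real
# model operates on molecular graphs, which we elide here.
gnn = nn.Linear(16, 2)

# Self-supervised pre-training: Adam, learning rate 1e-3, batch size N=512.
pretrain_optimizer = torch.optim.Adam(gnn.parameters(), lr=1e-3)
pretrain_loader = torch.utils.data.DataLoader(
    torch.randn(2048, 16),   # stand-in for the 2M unlabeled ZINC15 molecules
    batch_size=512, shuffle=True)

# Fine-tuning: Adam, learning rate 1e-3, batch size 32, 100 epochs.
finetune_optimizer = torch.optim.Adam(gnn.parameters(), lr=1e-3)
finetune_loader = torch.utils.data.DataLoader(
    torch.randn(320, 16),    # stand-in for a labeled downstream data set
    batch_size=32, shuffle=True)

for epoch in range(100):
    for batch in finetune_loader:
        finetune_optimizer.zero_grad()
        logits = gnn(batch)
        # Dummy objective; the paper fine-tunes on binary classification.
        loss = logits.pow(2).mean()
        loss.backward()
        finetune_optimizer.step()
```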