Label-invariant Augmentation for Semi-Supervised Graph Classification

Authors: Han Yue, Chunhui Zhang, Chuxu Zhang, Hongfu Liu

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In the semi-supervised scenario, we demonstrate our proposed method outperforms the classical graph neural network based methods and recent graph contrastive learning on eight benchmark graph-structured data, followed by several in-depth experiments to further explore the label-invariant augmentation in several aspects."
Researcher Affiliation | Academia | Han Yue, Chunhui Zhang, Chuxu Zhang, Hongfu Liu; Michtom School of Computer Science, Brandeis University, Waltham, MA; {hanyue,chunhuizhang,chuxuzhang,hongfuliu}@brandeis.edu
Pseudocode | No | The paper describes the methodology using text and mathematical equations, but it does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | "Our code is available at https://github.com/brandeis-machine-learning/GLA"
Open Datasets | Yes | "We select eight public graph classification benchmark datasets from TUDataset [27] for evaluation, including MUTAG [10], PROTEINS [1], DD [12], NCI1 [32], COLLAB [38], RDT-B [38], RDT-M5K [38], and GITHUB [29]."
Dataset Splits | Yes | "We evaluate the models with 10-fold cross-validation. We randomly shuffle a dataset and then evenly split it into 10 parts. Each fold corresponds to one part of data as the test set and another part as the validation set to select the best epoch, where the rest folds are used for training. We select 30%, 50%, 70% graphs from the training set as labeled graphs for each fold, then conduct semi-supervised learning."
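The quoted split protocol (shuffle, 10 equal parts, one part for test, one for validation, the rest for training, with 30%/50%/70% of the training graphs labeled) can be sketched as follows. This is an illustrative reconstruction, not code from the GLA repository; all names and the choice of which part serves as the validation fold are assumptions.

```python
import random

def make_folds(graphs, labeled_frac=0.3, n_folds=10, seed=0):
    """Sketch of the quoted protocol: shuffle, split into n_folds parts;
    per fold one part is the test set, another the validation set, the
    remaining parts form the training set, of which a fraction
    (30%/50%/70% in the paper) is treated as labeled."""
    rng = random.Random(seed)
    idx = list(range(len(graphs)))
    rng.shuffle(idx)
    parts = [idx[i::n_folds] for i in range(n_folds)]
    folds = []
    for k in range(n_folds):
        val_k = (k + 1) % n_folds  # assumption: next part as validation
        train = [i for j in range(n_folds)
                 if j not in (k, val_k) for i in parts[j]]
        folds.append({
            "train": train,
            "labeled": train[: int(labeled_frac * len(train))],
            "val": parts[val_k],
            "test": parts[k],
        })
    return folds
```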
Hardware Specification | No | The paper states: "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] We did not include memory or time consumption comparisons." No specific hardware details are provided.
Software Dependencies | No | The paper mentions implementing networks "by PyTorch" but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | "We implement the networks based on GraphCL [40] by PyTorch, set the magnitude of perturbation η to 1.0, and the weight of classification loss α to 1.0, which is the same with GraphCL. We adopt Adam optimizer [20] to minimize the objective function in Eq. (9)."
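The quoted setup names two hyperparameters: a perturbation magnitude η = 1.0 and a classification-loss weight α = 1.0. Eq. (9) itself is not reproduced in this report, so the sketch below only illustrates how such hyperparameters typically enter: a representation perturbed with magnitude η, and a composite objective weighting a classification term by α. Both functions are hypothetical placeholders, not the paper's actual augmentation or objective.

```python
def perturb(embedding, direction, eta=1.0):
    # Perturb a graph embedding with magnitude eta (eta = 1.0 in the
    # quoted setup); `direction` is a placeholder for whatever
    # perturbation the method learns.
    return [z + eta * d for z, d in zip(embedding, direction)]

def objective(loss_other, loss_cls, alpha=1.0):
    # Composite objective with classification-loss weight alpha
    # (alpha = 1.0 in the quoted setup); loss_other stands in for the
    # remaining terms of Eq. (9).
    return loss_other + alpha * loss_cls
```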