Graph Mixup with Soft Alignments
Authors: Hongyi Ling, Zhimeng Jiang, Meng Liu, Shuiwang Ji, Na Zou
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct systematic experiments to show that S-Mixup can improve the performance and generalization of graph neural networks (GNNs) on various graph classification tasks. |
| Researcher Affiliation | Academia | Department of Computer Science & Engineering, Texas A&M University, TX, USA; Department of Engineering Technology & Industrial Distribution, Texas A&M University, TX, USA. |
| Pseudocode | Yes | "Algorithm 1 Training algorithm"; "Algorithm 2 Mixup algorithm" |
| Open Source Code | Yes | Our code is publicly available as part of the DIG package (https://github.com/divelab/DIG). |
| Open Datasets | Yes | In this section, we evaluate the effectiveness of our method on six real-world datasets from the TUDatasets benchmark (Morris et al., 2020)... We also conduct experiments on ogbg-molhiv, which is a large molecular graph dataset from the OGB benchmark (Hu et al., 2020). |
| Dataset Splits | Yes | For the TUDatasets benchmark, we randomly split the dataset into train/validation/test data by 80%/10%/10%. (A hedged loading-and-split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using GCN and GIN models and the Adam optimizer, but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow, specific graph neural network libraries). |
| Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2015) to train all models. See Table 8 for the hyperparameters of training the classification model. ... For the graph matching network used in S-Mixup, we set the hidden size as 256 and the readout layer as global sum pooling. For all six datasets, the graph matching network is trained for 500 epochs with a learning rate of 0.001. For the number of layers and batch size, see Table 9. (A hedged training-setup sketch follows the table.) |
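As a concrete illustration of the "Open Datasets" and "Dataset Splits" rows, here is a minimal sketch of loading one TUDatasets graph-classification dataset and applying the reported 80%/10%/10% random split. It assumes PyTorch Geometric; the dataset name "IMDB-BINARY" and the root path are only examples, and the paper's actual data handling lives in the DIG package (https://github.com/divelab/DIG) and may differ.

```python
# Hedged sketch: load one TUDatasets benchmark dataset and split it
# 80%/10%/10% into train/validation/test, as reported above.
# Assumes PyTorch Geometric; "IMDB-BINARY" is only an example dataset name.
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root="data/TUDataset", name="IMDB-BINARY").shuffle()

n_train = int(0.8 * len(dataset))
n_val = int(0.1 * len(dataset))

train_dataset = dataset[:n_train]
val_dataset = dataset[n_train:n_train + n_val]
test_dataset = dataset[n_train + n_val:]
```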
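The "Experiment Setup" row can likewise be mapped onto code. The sketch below, continuing from the loading sketch above, only wires together the reported hyperparameters (hidden size 256, global sum-pooling readout, Adam optimizer, learning rate 0.001, 500 epochs) around a hypothetical placeholder encoder; it is not the paper's graph matching network, which is implemented in the DIG package.

```python
# Hedged sketch of the reported training configuration for the graph matching
# network: hidden size 256, global sum-pooling readout, Adam, lr = 0.001,
# 500 epochs. `PlaceholderEncoder` is hypothetical, not the paper's model.
import torch
from torch_geometric.nn import GCNConv, global_add_pool


class PlaceholderEncoder(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=256):
        super().__init__()
        self.conv = GCNConv(in_dim, hidden_dim)

    def forward(self, x, edge_index, batch):
        h = self.conv(x, edge_index).relu()
        # "Global sum pooling" readout: sum node embeddings within each graph.
        return global_add_pool(h, batch)


model = PlaceholderEncoder(in_dim=dataset.num_features, hidden_dim=256)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(500):
    ...  # forward pass, matching loss, and optimizer step go here
```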