Metropolis-Hastings Data Augmentation for Graph Neural Networks
Authors: Hyeonjin Park, Seunghun Lee, Sihyeon Kim, Jinyoung Park, Jisu Jeong, Kyung-Min Kim, Jung-Woo Ha, Hyunwoo J. Kim
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments demonstrate that MH-Aug can generate a sequence of samples according to the target distribution to significantly improve the performance of GNNs. |
| Researcher Affiliation | Collaboration | Korea University, NAVER CLOVA, NAVER AI LAB |
| Pseudocode | Yes | Algorithm 1 Metropolis-Hastings Data Augmentation (MH-Aug) Framework |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for their methodology is publicly available. |
| Open Datasets | Yes | Datasets. We evaluate our method on five benchmark datasets in three categories: (1) Citation networks: CORA and CITESEER [31], (2) Amazon product networks: Computers and Photo [32], and (3) Coauthor Networks: CS [32]. |
| Dataset Splits | Yes | We follow the standard data split protocol in the transductive settings for node classification, e.g., [4] for CORA and CITESEER and [32] for the rest. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with their exact versions). |
| Experiment Setup | No | Although hyperparameters are mentioned (e.g., "λs are hyperparameters"), the paper provides neither concrete values for them nor other training configurations (learning rate, batch size, epochs) in the provided text, and it has no dedicated "Experimental Setup" section detailing these settings. |
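Since the paper's Algorithm 1 (MH-Aug) is given only as pseudocode and no source code is released, the core accept/reject loop can be illustrated with a minimal, generic Metropolis-Hastings sketch. This is not the authors' implementation: the toy target (a preferred edge-drop ratio over a set of edges), the single-edge-flip proposal, and all names (`mh_sample`, `log_target`, `propose`) are illustrative assumptions; MH-Aug's actual target distribution is defined over graph augmentations in the paper.

```python
import math
import random

def mh_sample(log_target, propose, init, n_steps, seed=0):
    """Generic Metropolis-Hastings loop with a symmetric proposal.

    log_target: unnormalized log-density of the target distribution
    propose:    symmetric proposal, maps current state -> candidate
    """
    rng = random.Random(seed)
    state = init
    samples = []
    for _ in range(n_steps):
        candidate = propose(state, rng)
        # Acceptance ratio for a symmetric proposal: min(1, p(x') / p(x))
        log_alpha = log_target(candidate) - log_target(state)
        if math.log(rng.random() + 1e-300) < log_alpha:
            state = candidate
        samples.append(state)
    return samples

# Toy stand-in for an augmentation target: favor masks that drop
# roughly 20% of 10 hypothetical edges (1 = keep edge, 0 = drop).
n_edges, target_ratio, sharpness = 10, 0.2, 40.0

def log_target(mask):
    drop = 1.0 - sum(mask) / n_edges
    return -sharpness * (drop - target_ratio) ** 2

def propose(mask, rng):
    i = rng.randrange(n_edges)  # flip one edge on/off (symmetric move)
    return mask[:i] + (1 - mask[i],) + mask[i + 1:]

chain = mh_sample(log_target, propose, (1,) * n_edges, 2000)
avg_drop = 1.0 - sum(sum(m) for m in chain) / (len(chain) * n_edges)
```

Because the proposal is symmetric, the Hastings correction term cancels and acceptance depends only on the target-density ratio; the chain's average drop ratio settles near the preferred value.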