A Flexible Generative Framework for Graph-based Semi-supervised Learning
Authors: Jiaqi Ma, Weijing Tang, Ji Zhu, Qiaozhu Mei
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct thorough experiments on benchmark datasets for graph-based semi-supervised learning. Results show that the proposed methods outperform the state-of-the-art models in most settings. |
| Researcher Affiliation | Academia | Jiaqi Ma jiaqima@umich.edu School of Information, University of Michigan; Weijing Tang weijtang@umich.edu School of Information, University of Michigan; Ji Zhu jizhu@umich.edu Department of Statistics, University of Michigan; Qiaozhu Mei qmei@umich.edu Department of EECS, University of Michigan |
| Pseudocode | No | The paper describes the generative framework and its instantiations in prose, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper uses and cites the PyTorch-Geometric library for the GCN and GAT implementations and the datasets, but it does not provide a link to source code for the proposed G3NN methodology itself. |
| Open Datasets | Yes | We use three standard semi-supervised learning benchmark datasets for graph neural networks, Citeseer, Cora, and Pubmed [19, 23]. We adopt these datasets from the PyTorch-Geometric library [4] in our experiments. (See the loading sketch below the table.) |
| Dataset Splits | Yes | We closely follow the dataset setup in Yang et al. [23] and Kipf and Welling [10]. We apply early stopping with the cross-entropy loss on the validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the "PyTorch-Geometric [4] library" but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | We grid search the number of hidden units over (16, 32, 64) and the learning rate over (0.001, 0.005, 0.01). GAT uses a multi-head attention mechanism; in our experiments, we fix the number of heads at 8 and grid search both the total number of hidden units and the number of hidden units per head over (16, 32, 64). For the proposed generative models, we grid search the coefficient of the supervised loss η over (0.5, 1, 10). The number of negative edges is set to the number of observed edges in the graph. For LSM models, the dimension of the feature transformation matrix U is fixed to 8 × d, where d is the feature size. For SBM models, we use two settings of (p0, p1): (0.9, 0.1) and (0.5, 0.6). We use the Adam optimizer to train all models and apply early stopping with the cross-entropy loss on the validation set. (A configuration sketch follows the table.) |
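For reference, here is a minimal sketch of how the three benchmark datasets can be pulled from PyTorch-Geometric, assuming the standard `Planetoid` loader; the `root` cache path is an illustrative choice, not from the paper. The loader ships the public split of Yang et al. [23] as boolean node masks, which matches the split setup the paper reports.

```python
# Minimal sketch: load the three citation benchmarks via PyTorch-Geometric.
# The root directory "data" is an arbitrary local cache path.
from torch_geometric.datasets import Planetoid

for name in ("Cora", "CiteSeer", "PubMed"):
    dataset = Planetoid(root="data", name=name)
    data = dataset[0]  # each dataset contains a single graph
    # Planetoid ships the public split of Yang et al. as boolean masks
    # over the nodes: train_mask, val_mask, and test_mask.
    print(name,
          "nodes:", data.num_nodes,
          "train:", int(data.train_mask.sum()),
          "val:", int(data.val_mask.sum()),
          "test:", int(data.test_mask.sum()))
```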
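And a sketch of the search space from the Experiment Setup row, written out as plain Python. Only the grid values, the fixed head count, the SBM priors, and the Adam/early-stopping choices come from the paper; the enumeration structure is an assumption for illustration.

```python
# Sketch of the reported hyperparameter grid; not the authors' code.
import itertools

hidden_units = (16, 32, 64)         # searched for GCN/GAT and the generative models
learning_rates = (0.001, 0.005, 0.01)
etas = (0.5, 1, 10)                 # coefficient of the supervised loss (generative models)
sbm_settings = ((0.9, 0.1), (0.5, 0.6))  # (p0, p1) priors for the SBM variant
num_heads = 8                       # fixed for GAT's multi-head attention

# Each configuration is then trained with the Adam optimizer and
# early-stopped on validation cross-entropy, per the paper.
configs = list(itertools.product(hidden_units, learning_rates, etas))
print(f"{len(configs)} configurations per SBM setting")  # 27
```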