Deep Graph Structural Infomax
Authors: Wenting Zhao, Gongping Xu, Zhen Cui, Siqiang Luo, Cheng Long, Tong Zhang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on different types of datasets demonstrate the effectiveness and superiority of the proposed method. |
| Researcher Affiliation | Academia | Wenting Zhao¹, Gongping Xu¹, Zhen Cui¹*, Siqiang Luo², Cheng Long², Tong Zhang¹. ¹Nanjing University of Science and Technology, Nanjing, China; ²Nanyang Technological University, Singapore. |
| Pseudocode | No | No pseudocode or algorithm block was found in the paper. |
| Open Source Code | Yes | https://github.com/wtzhao1631/dgsi |
| Open Datasets | Yes | We conduct experiments on six real-world node classification datasets: Cora, Pubmed, Citeseer (Kipf and Welling 2016a), Cora-Full, Amazon Photo and Amazon Computers (Shchur et al. 2018). |
| Dataset Splits | Yes | The first setting follows DGI, adopting the widely used train/validation/test split on the Cora, Citeseer, and Pubmed datasets, with 20 training samples per class. The second randomly samples [1, 5] labeled nodes per class to train the network (a label-scarcity setting); all six datasets are evaluated under this setting. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, cloud instance types) used for running experiments are explicitly mentioned in the paper. |
| Software Dependencies | No | The paper mentions 'a two-layer GCN is adopted' and 'Prelu is leveraged as a nonlinear activation function', but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | A two-layer GCN is adopted as the basic framework for self-supervised learning on graphs, and the outputs of the first and second layers are summed to form the learned node representation. Both the number of hidden units and the dimension of the learned representation are set to 512. PReLU is used as the nonlinear activation function. The learning rate is 0.0005 for all datasets. The weight of each term in the objective is determined by grid search... In addition, early stopping with a patience of 20 is used. |
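
The label-scarcity protocol in the Dataset Splits row (sampling between 1 and 5 labeled nodes per class for training) can be illustrated with a short sketch. The helper below is our own illustration, not code from the DGSI repository; the function name and seeding scheme are assumptions.

```python
# Hypothetical sketch of the label-scarcity split: pick k labeled nodes per
# class for training, with k drawn from [1, 5] as described in the paper.
import torch

def sample_per_class_split(labels: torch.Tensor, k: int, seed: int = 0) -> torch.Tensor:
    """Return a boolean train mask selecting k randomly chosen nodes per class."""
    g = torch.Generator().manual_seed(seed)
    train_mask = torch.zeros(labels.size(0), dtype=torch.bool)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]      # nodes of class c
        perm = idx[torch.randperm(idx.numel(), generator=g)]
        train_mask[perm[:k]] = True                         # keep k labeled nodes
    return train_mask
```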
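The Experiment Setup row describes a two-layer GCN encoder whose first- and second-layer outputs are summed, with 512 hidden units, a 512-dimensional representation, and PReLU activations. The sketch below is a minimal reading of that description under stated assumptions: the class name, the use of `torch_geometric.nn.GCNConv`, and the placement of the activations are our choices, and the actual DGSI code may differ.

```python
# Minimal sketch of the encoder described above (not the authors' implementation).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class GCNEncoder(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 512, out_dim: int = 512):
        super().__init__()
        # Summing the two layer outputs assumes hid_dim == out_dim (both 512 here).
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)
        self.act1 = nn.PReLU(hid_dim)
        self.act2 = nn.PReLU(out_dim)

    def forward(self, x, edge_index):
        h1 = self.act1(self.conv1(x, edge_index))   # first-layer output
        h2 = self.act2(self.conv2(h1, edge_index))  # second-layer output
        return h1 + h2                              # sum of both layers as the representation

# Reported training hyperparameters: Adam-style optimization with a learning
# rate of 0.0005 and early stopping with a patience of 20 epochs, e.g.
# optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
```

The sum of the two layer outputs is what distinguishes this encoder from a plain two-layer GCN, which would return only `h2`; the remaining settings (512 dimensions, PReLU, learning rate 0.0005, patience 20) follow the table row directly.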