Simple Unsupervised Graph Representation Learning

Authors: Yujie Mo, Liang Peng, Jie Xu, Xiaoshuang Shi, Xiaofeng Zhu (pp. 7797-7805)

AAAI 2022

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on various real-world datasets demonstrate the effectiveness and efficiency of our method, compared to state-of-the-art methods. The source codes are released at https://github.com/YujieMo/SUGRL. Comprehensive empirical studies on 8 public benchmark datasets verify the effectiveness and efficiency of our method, compared to 11 comparison methods, in terms of node classification.
Researcher Affiliation Academia (1) School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; (2) Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518000, China
Pseudocode No The paper describes the proposed method using descriptive text and mathematical equations, along with a flowchart in Figure 2. However, it does not include any formal pseudocode blocks or algorithms.
Open Source Code Yes The source codes are released at https://github.com/YujieMo/SUGRL.
Open Datasets Yes In our experiments, we used 8 commonly used benchmark datasets, including 3 citation network datasets (i.e., Cora, Citeseer, and Pubmed) (Yang, Cohen, and Salakhudinov 2016), 2 Amazon sale datasets (i.e., Photo and Computers) (Shchur et al. 2018), and 3 large-scale datasets (i.e., Ogbn-arxiv, Ogbn-mag, and Ogbn-products) (Weihua et al. 2020).
Dataset Splits No The paper states: 'For the node classification task, we follow the standard linear evaluation protocol in DGI.' While this implies the use of train/validation/test splits as part of a standard protocol, the paper itself does not provide specific details about these splits (e.g., percentages or counts) within its text.
Hardware Specification Yes All experiments were implemented in PyTorch and conducted on a server with 8 NVIDIA GeForce 3090 GPUs (24GB memory each).
Software Dependencies No The paper states that experiments were 'implemented in PyTorch' but does not provide a specific version number for PyTorch or any other software libraries used, which is required for reproducible software dependencies.
Experiment Setup Yes In SUGRL, all parameters were initialized by the Glorot initialization (Glorot and Bengio 2010) and optimized by the Adam optimizer (Kingma and Ba 2015). For the optimizer, we set the initial learning rate within the range [0.001, 0.01] and the weight decay within [0, 0.0001] for all datasets, respectively. We apply the ReLU function (Nair and Hinton 2010) as a nonlinear activation for each layer and conduct row normalization on input features. Moreover, a dropout function is applied behind each layer. We investigate the impact of hyper-parameters in SUGRL, i.e., α and β in Eq. (11) as well as ω1 and ω2 in Eq. (12). We conduct node classification by varying the values of α and β from 0.1 to 0.9... We also conduct node classification... by varying the values of ω1 and ω2 from 10^-3 to 10^3, and fix the weight of LU to 1...
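The preprocessing and initialization choices quoted above can be sketched as follows. This is a minimal NumPy illustration of the reported setup (Glorot uniform initialization, row normalization of input features, ReLU, and dropout), not the authors' released code; all function names here are our own, and the paper's actual implementation uses PyTorch with the Adam optimizer.

```python
import numpy as np

def glorot_init(fan_in, fan_out, seed=None):
    # Glorot/Xavier uniform initialization: W ~ U(-a, a),
    # with a = sqrt(6 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def row_normalize(x):
    # Row normalization of input features: scale each row to sum to 1
    # (all-zero rows are left unchanged to avoid division by zero)
    s = x.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return x / s

def relu(x):
    # ReLU nonlinearity applied after each layer
    return np.maximum(x, 0.0)

def dropout(x, p, seed=None):
    # Inverted dropout: zero entries with probability p,
    # rescale survivors by 1/(1-p) so the expectation is unchanged
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) >= p
    return np.where(mask, x / (1.0 - p), 0.0)
```

A single hidden layer under this setup would then be `dropout(relu(row_normalize(X) @ glorot_init(d_in, d_out)), p)`, with the learning rate and weight decay for the Adam optimizer chosen from the reported ranges [0.001, 0.01] and [0, 0.0001].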