Revisiting Self-Supervised Heterogeneous Graph Learning from Spectral Clustering Perspective
Authors: Yujie Mo, Zhihe Lu, Runpeng Yu, Xiaofeng Zhu, Xinchao Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results affirm the superiority of our method, showcasing remarkable improvements in several downstream tasks compared to existing methods. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, University of Electronic Science and Technology of China; (2) National University of Singapore |
| Pseudocode | Yes | Algorithm 1 The pseudo-code of the proposed method. |
| Open Source Code | Yes | The code of the proposed method is released at https://github.com/YujieMo/SCHOOL. |
| Open Datasets | Yes | We use four public heterogeneous graph datasets and two public homogeneous graph datasets from various domains. Heterogeneous graph datasets include three academic datasets (i.e., ACM [56], DBLP [56], and Aminer [11]), and one business dataset (i.e., Yelp [27]). Homogeneous graph datasets include two sale datasets (i.e., Photo and Computers [43]). |
| Dataset Splits | No | Table 3 provides '#Training' and '#Test' splits for the datasets but does not explicitly mention 'validation' splits. |
| Hardware Specification | Yes | All experiments were implemented in PyTorch and conducted on a server with 8 NVIDIA GeForce 3090 GPUs (24GB memory each). |
| Software Dependencies | No | The paper mentions that experiments were 'implemented in PyTorch' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | In the proposed method, all parameters were optimized by the Adam optimizer [19] with an initial learning rate. Moreover, we use early stopping with a patience of 30 to train the proposed SHGL model. We report the settings for the dimensions of encoders in Table 5. (A hedged training-loop sketch follows this table.) |
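
The Experiment Setup row specifies only Adam and early stopping with a patience of 30; the paper excerpt does not report the learning rate, loss function, or model. The sketch below shows what such a loop could look like in PyTorch. The encoder, batch, loss, and learning rate are all placeholder assumptions, not the authors' actual SHGL configuration.

```python
# Minimal sketch of the reported setup: Adam optimizer plus early stopping
# with a patience of 30 epochs. Model, data, loss, and lr are placeholders.
import torch

model = torch.nn.Linear(64, 16)  # placeholder encoder, not the paper's model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr not reported

best_loss, patience, wait = float("inf"), 30, 0
for epoch in range(1000):
    optimizer.zero_grad()
    x = torch.randn(32, 64)          # placeholder batch
    loss = model(x).pow(2).mean()    # placeholder self-supervised loss
    loss.backward()
    optimizer.step()

    # Early stopping: halt once the loss fails to improve for `patience` epochs.
    if loss.item() < best_loss:
        best_loss, wait = loss.item(), 0
    else:
        wait += 1
        if wait >= patience:
            break
```

In practice the improvement check would usually track validation loss rather than training loss; training loss is used here only to keep the sketch self-contained.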