Self-supervised Graph Neural Networks via Low-Rank Decomposition

Authors: Liang Yang, Runjie Shi, Qiuliang Zhang, Bingxin Niu, Zhen Wang, Xiaochun Cao, Chuan Wang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Extensive experiments demonstrate the superior performance and the robustness of LRD-GNNs.' (Abstract); '4 Experiments' (section title).
Researcher Affiliation | Academia | (1) School of Artificial Intelligence, Hebei University of Technology, Tianjin, China; (2) School of Artificial Intelligence, Optics and Electronics (iOPEN), School of Cybersecurity, Northwestern Polytechnical University, Xi'an, China; (3) School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University, Shenzhen, China; (4) Institute of Information Engineering, CAS, Beijing, China
Pseudocode | Yes | Algorithm 1 (Matrix ADMM) and Algorithm 2 (Tensor ADMM) in Appendix A.
Open Source Code | No | The paper neither states that source code is released nor provides a link to a code repository.
Open Datasets | Yes | 'Our experiments are conducted on 12 commonly used benchmark datasets, including 6 homophilic graph datasets (i.e., Cora, CiteSeer, PubMed, Wiki-CS, Amazon Computers and Amazon Photo [26, 27, 28]) and 6 heterophilic graph datasets (i.e., Chameleon, Squirrel, Actor, Cornell, Texas, and Wisconsin [29]).'
Dataset Splits | Yes | 'For Cora, CiteSeer, and PubMed datasets, we adopt the public splits with 20 labeled nodes per class for training, 500 nodes for validation and 1000 nodes for testing. For Wiki-CS, Computers and Photo datasets, we randomly split all nodes into three parts, i.e., 10% nodes for training, 10% nodes for validation and the remaining 80% nodes for testing.' Performance on the heterophilic datasets is evaluated on the commonly used 48%/32%/20% training/validation/testing splits.
Hardware Specification | No | The paper states 'All methods were implemented in Pytorch with Adam Optimizer' but does not specify hardware details such as the GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions 'All methods were implemented in Pytorch with Adam Optimizer' but does not provide version numbers for PyTorch or any other software dependency.
Experiment Setup | Yes | 'The hyper-parameter search space is: learning rate in {0.1, 0.05, 0.01}, dropout in {0.2, 0.3, 0.4}. Besides, early stopping with a patience of 200 epochs and L2 regularization with coefficient in {1E-2, 5E-3, 1E-3} are employed to prevent overfitting.'
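The random 10%/10%/80% node split reported for Wiki-CS, Computers, and Photo can be sketched as follows. This is a minimal PyTorch illustration, not code from the paper; the function name, seed handling, and mask layout are our assumptions.

```python
import torch

def random_node_split(num_nodes, train_ratio=0.1, val_ratio=0.1, seed=0):
    """Randomly partition node indices into train/val/test boolean masks.

    Illustrative sketch only: the paper does not publish its split code,
    so the seeding and rounding conventions here are assumptions.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)

    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True  # remaining ~80% of nodes
    return train_mask, val_mask, test_mask
```

Boolean masks are used here because that is the usual convention for node-level splits in PyTorch graph libraries; the three masks are disjoint and cover all nodes.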
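The quoted hyper-parameter search space amounts to a small grid search. A sketch of enumerating that grid, under the assumption that the three coefficient lists are crossed exhaustively (the paper does not state the search strategy, and the dict layout and helper name are ours):

```python
from itertools import product

# Values quoted in the paper; the structure below is our assumption.
SEARCH_SPACE = {
    "lr": [0.1, 0.05, 0.01],            # Adam learning rate
    "dropout": [0.2, 0.3, 0.4],
    "weight_decay": [1e-2, 5e-3, 1e-3], # L2 regularization coefficient
}
PATIENCE = 200  # early-stopping patience, in epochs (as stated)

def grid(space):
    """Yield one config dict per point in the Cartesian product of the space."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(SEARCH_SPACE))  # 3 * 3 * 3 = 27 candidate configurations
```

Each config would then be passed to a training run (e.g., `torch.optim.Adam(params, lr=cfg["lr"], weight_decay=cfg["weight_decay"])`), with validation accuracy used for early stopping and model selection.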