Beyond Graph Convolutional Network: An Interpretable Regularizer-Centered Optimization Framework

Authors: Shiping Wang, Zhihao Wu, Yuhong Chen, Yong Chen

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on eight public datasets demonstrate that tsGCN achieves superior performance against quite a few state-of-the-art competitors w.r.t. classification tasks. |
| Researcher Affiliation | Academia | (1) College of Computer and Data Science, Fuzhou University, China; (2) Fujian Provincial Key Laboratory of Network Computing and Intelligent Information Processing, Fuzhou University, China; (3) School of Computer Science, Beijing University of Posts and Telecommunications, China |
| Pseudocode | Yes | Algorithm 1: Topological and Semantic Regularized GCN |
| Open Source Code | Yes | Its code is available at https://github.com/ZhihaoWu99/tsGCN and the supplementary material is uploaded to https://arxiv.org/abs/2301.04318. |
| Open Datasets | Yes | Datasets Cora, Citeseer and Pubmed are citation networks, and CoraFull is a larger version of Cora; ACM is a paper network, and BlogCatalog and Flickr are social networks; UAI has been used for community detection. The detailed statistics of these eight public datasets are summarized in Table 2. |
| Dataset Splits | Yes | For all experiments, we randomly split samples into a small set of 20 labeled samples per class for training, a set of 500 samples for validating, and a set of 1,000 samples for testing. (A minimal sketch of this split protocol is given below the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used for the experiments. |
| Software Dependencies | No | The paper mentions implementing in PyTorch and generally discusses software such as GCNs, but it does not specify exact version numbers for any dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | With respect to tsGCN, the learning rate, weight decay and the size of hidden units are set to 1×10⁻², 5×10⁻⁴ and 32, respectively. The hyperparameters α and β are selected in {0.1, 0.2, …, 1.0} for different datasets, and r is chosen in {⌈N/2¹⁰⌉, …, ⌈N/2³⌉}, where N is the number of nodes. (A hedged configuration sketch also follows below the table.) |
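
The split protocol quoted in the Dataset Splits row is concrete enough to sketch. Below is a minimal NumPy rendering of that protocol, assuming a dense integer label vector; the function name `random_split` and the fixed seed are illustrative, not from the paper or its repository.

```python
import numpy as np

def random_split(labels: np.ndarray, train_per_class: int = 20,
                 num_val: int = 500, num_test: int = 1000,
                 seed: int = 0):
    """Randomly split node indices: 20 labeled nodes per class for
    training, 500 nodes for validation, 1,000 for testing."""
    rng = np.random.default_rng(seed)
    train_idx = []
    for c in np.unique(labels):
        nodes_c = np.flatnonzero(labels == c)
        train_idx.extend(rng.choice(nodes_c, train_per_class, replace=False))
    train_idx = np.array(train_idx)
    # Validation and test sets are drawn from the remaining nodes.
    rest = rng.permutation(np.setdiff1d(np.arange(labels.shape[0]), train_idx))
    return train_idx, rest[:num_val], rest[num_val:num_val + num_test]

# Toy usage: an 8-class label vector over 2,000 nodes.
labels = np.random.randint(0, 8, size=2000)
train_idx, val_idx, test_idx = random_split(labels)
print(len(train_idx), len(val_idx), len(test_idx))  # 160 500 1000
```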
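
The Experiment Setup row likewise pins down the training configuration. The sketch below wires the quoted values into a PyTorch optimizer and enumerates the stated search grids; the Adam optimizer and the placeholder model are assumptions on our part (the paper gives the learning rate, weight decay, and hidden size, but the full training loop lives in the authors' repository).

```python
import math
import torch

# Hyperparameters quoted in the Experiment Setup row.
LEARNING_RATE = 1e-2   # 1 × 10⁻²
WEIGHT_DECAY = 5e-4    # 5 × 10⁻⁴
HIDDEN_UNITS = 32

# Search grids for alpha, beta, and the parameter r.
alphas = [round(0.1 * k, 1) for k in range(1, 11)]  # {0.1, 0.2, ..., 1.0}
betas = [round(0.1 * k, 1) for k in range(1, 11)]
N = 2708  # e.g., the number of nodes in Cora
ranks = [math.ceil(N / 2 ** p) for p in range(10, 2, -1)]  # ⌈N/2¹⁰⌉ ... ⌈N/2³⌉

# Placeholder model for illustration only; in practice this would be a
# tsGCN instance built from the authors' repository.
model = torch.nn.Linear(1433, HIDDEN_UNITS)
optimizer = torch.optim.Adam(model.parameters(),
                             lr=LEARNING_RATE,
                             weight_decay=WEIGHT_DECAY)
```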