Generalized Equivariance and Preferential Labeling for GNN Node Classification

Authors: Zeyu Sun, Wenjie Zhang, Lili Mou, Qihao Zhu, Yingfei Xiong, Lu Zhang

AAAI 2022, pp. 8395-8403

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that we achieve high performance in several unattributed node classification tasks. We evaluated our approach on two unattributed node classification applications, maximum independent set solving (MIS) and propositional satisfiability solving (SAT). Experimental results show that our approach successfully alleviates the limitations of existing GNNs when encoding unattributed graphs, where the number of errors drops by 39% in the MIS problem and 76% in the SAT problem. (An illustrative sketch of one possible labeling procedure appears after the table.)
Researcher Affiliation | Academia | 1 Key Laboratory of High Confidence Software Technologies, MoE; School of Computer Science, Peking University, 100871, P. R. China; 2 Department of Computing Science, Alberta Machine Intelligence Institute (Amii), University of Alberta, Edmonton T6G 2E8, Canada
Pseudocode | No | The paper describes its methods verbally and mathematically but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | The code and data are available at https://github.com/zysszy/Preferential-Labeling
Open Datasets | Yes | We follow the data synthesis process in previous work (Li, Chen, and Koltun 2018) and generate 173,751, 20,000, 20,000 graphs for training, development, and test, respectively. We used the SAT dataset in Zhang et al. (2020).
Dataset Splits | Yes | We follow the data synthesis process in previous work (Li, Chen, and Koltun 2018) and generate 173,751, 20,000, 20,000 graphs for training, development, and test, respectively. The training and development sets contain 500K and 396K SAT formulas, respectively.
Hardware Specification | Yes | For training, we use Adam (Kingma and Ba 2015) to train the model with learning rate 10^-4 on a single Titan RTX.
Software Dependencies | No | The paper mentions using specific GNN models and optimizers (GCN, NLocalSAT, Adam) but does not provide version numbers for any software dependencies or libraries (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | The model contains 20 graph convolutional layers, which are regularized by dropout with rate of 0.1. For the hidden size of all layers used in this model, we set it to 128. For training, we use Adam (Kingma and Ba 2015) to train the model with learning rate 10^-4 on a single Titan RTX. (A configuration sketch follows the table.)
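
The Experiment Setup row pins down the full reported configuration: 20 graph convolutional layers, hidden size 128, dropout 0.1, and Adam with learning rate 10^-4. Below is a minimal sketch of that configuration in plain PyTorch, assuming a dense symmetrically normalized adjacency and placeholder input/output dimensions; it illustrates the reported hyperparameters and is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCN(nn.Module):
    """20-layer GCN using the hyperparameters quoted in the table.

    Input/output dimensions are assumptions made for illustration.
    """
    def __init__(self, in_dim=128, hidden=128, n_classes=2, n_layers=20, p_drop=0.1):
        super().__init__()
        dims = [in_dim] + [hidden] * (n_layers - 1) + [n_classes]
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out) for d_in, d_out in zip(dims, dims[1:])
        )
        self.dropout = nn.Dropout(p_drop)  # dropout rate 0.1

    def forward(self, a_hat, x):
        # a_hat: normalized adjacency with self-loops, shape (n_nodes, n_nodes)
        for i, layer in enumerate(self.layers):
            x = layer(a_hat @ x)              # one graph convolution: A_hat X W
            if i < len(self.layers) - 1:      # no ReLU/dropout after the last layer
                x = self.dropout(F.relu(x))
        return x                              # per-node logits

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate 10^-4

# Toy usage: a 4-node path graph; random features stand in for node labelings.
A = torch.tensor([[0., 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]]) + torch.eye(4)      # adjacency plus self-loops
deg = A.sum(dim=1)
a_hat = A / torch.sqrt(deg[:, None] * deg[None, :])  # D^-1/2 (A + I) D^-1/2
logits = model(a_hat, torch.randn(4, 128))           # shape (4, n_classes)
```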
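The paper's headline technique, preferential labeling (referenced in the Research Type row), is not spelled out in this table, so the following is a hypothetical sketch of one plausible reading: for an unattributed graph, sample several random one-hot node labelings and take a gradient step only on the labeling the model already scores best. The names sample_labeling and preferential_step and the sample count n_samples are inventions for illustration; the released code at https://github.com/zysszy/Preferential-Labeling is the authoritative reference.

```python
import torch
import torch.nn.functional as F

def sample_labeling(n_nodes, dim=128):
    # Hypothetical: random one-hot index labels for otherwise unattributed
    # nodes (indices wrap around if n_nodes exceeds dim).
    perm = torch.randperm(n_nodes)
    x = torch.zeros(n_nodes, dim)
    x[torch.arange(n_nodes), perm % dim] = 1.0
    return x

def preferential_step(model, optimizer, a_hat, targets, n_samples=5):
    # Hypothetical training step: score candidate labelings without gradients,
    # then optimize only on the lowest-loss ("preferred") labeling.
    n = a_hat.shape[0]
    candidates = [sample_labeling(n) for _ in range(n_samples)]
    with torch.no_grad():
        losses = torch.stack(
            [F.cross_entropy(model(a_hat, x), targets) for x in candidates]
        )
    best = candidates[int(losses.argmin())]
    optimizer.zero_grad()
    loss = F.cross_entropy(model(a_hat, best), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# With the GCN sketch above and binary node targets:
#     preferential_step(model, optimizer, a_hat, torch.randint(0, 2, (4,)))
```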