All in a Row: Compressed Convolution Networks for Graphs

Authors: Junshu Sun, Shuhui Wang, Xinzhe Han, Zhe Xue, Qingming Huang

ICML 2023

Each entry below gives a reproducibility variable, its result, and the LLM response supporting that result.
Research Type: Experimental. We validate CoCN on several node classification and graph classification benchmarks. CoCN achieves superior performance over competitive convolutional GNNs and graph pooling models. We empirically evaluate CoCN on real-world benchmarks.
Researcher Affiliation: Academia. 1 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, China; 2 School of Computer Science and Technology, University of Chinese Academy of Sciences, China; 3 Peng Cheng Laboratory, China; 4 Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia, School of Computer Science, Beijing University of Posts and Telecommunications, China.
Pseudocode: Yes. We use PyTorch-style pseudo-code to formulate the update equation as $E^{(l)} = \mathrm{Tri}(E^{(l-1)}, k) = \mathrm{triu}(E^{(l-1)}, k) + \mathrm{tril}(E^{(l-1)}, k)$, where $k$ denotes the kernel size, $\mathrm{triu}(\cdot, i)$ denotes the upper triangular matrix with $i$ diagonals above the main diagonal, and $\mathrm{tril}(\cdot, i)$ denotes the lower triangular matrix with $i$ diagonals below the main diagonal.
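
For concreteness, here is a minimal runnable sketch of the quoted Tri update using torch.triu and torch.tril. The negative offset passed to torch.tril is an assumption: the quote writes tril(·, k) for "k diagonals below the main diagonal", which in PyTorch's convention corresponds to diagonal=-k. This is an illustration, not the authors' implementation.

```python
import torch

def tri(E: torch.Tensor, k: int) -> torch.Tensor:
    # Tri(E, k) = triu(E, k) + tril(E, k) from the quoted pseudo-code.
    # torch.triu(E, diagonal=k) keeps entries on and above the k-th
    # diagonal; torch.tril(E, diagonal=-k) keeps entries on and below
    # the (-k)-th diagonal (PyTorch counts diagonals below the main
    # one with negative offsets).
    return torch.triu(E, diagonal=k) + torch.tril(E, diagonal=-k)

E = torch.randn(5, 5)
print(tri(E, 2))  # zeroes the band of entries with |row - col| < 2
```
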
Open Source Code: Yes. Code is available at https://github.com/sunjss/CoCN.
Open Datasets: Yes. For graph classification, we use six datasets (Morris et al., 2020): three biochemical datasets (MUTAG, PROTEINS, NCI1) and three social network datasets (COLLAB, IMDB-BINARY, IMDB-MULTI). For node classification, we conduct experiments on six datasets: Chameleon, Squirrel (Rozemberczki et al., 2021), Cornell, Texas, Wisconsin (Pei et al., 2020), and Actor (Tang et al., 2009).
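
All twelve benchmarks ship with PyTorch Geometric's built-in dataset classes, so a hedged loading sketch is straightforward; the root paths below are placeholders, and this is not necessarily how the authors' code ingests the data.

```python
from torch_geometric.datasets import TUDataset, WikipediaNetwork, WebKB, Actor

# Graph classification benchmarks (TUDataset collection, Morris et al., 2020)
graph_datasets = {
    name: TUDataset(root="data/TU", name=name)
    for name in ["MUTAG", "PROTEINS", "NCI1", "COLLAB", "IMDB-BINARY", "IMDB-MULTI"]
}

# Node classification benchmarks
chameleon = WikipediaNetwork(root="data/wiki", name="chameleon")
squirrel = WikipediaNetwork(root="data/wiki", name="squirrel")
cornell = WebKB(root="data/webkb", name="Cornell")
texas = WebKB(root="data/webkb", name="Texas")
wisconsin = WebKB(root="data/webkb", name="Wisconsin")
actor = Actor(root="data/actor")
```
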
Dataset Splits: Yes. For Chameleon, Squirrel, Cornell, Texas, and Wisconsin, we use the same random 48%/32%/20% train/validation/test splits as Pei et al. (2020) and report average performance over ten splits. We also use early stopping regularization, where we stop training if the validation loss does not decrease for 100 epochs.
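
For illustration, the sketch below builds one random 48%/32%/20% train/validation/test node mask in PyTorch. The published splits from Pei et al. (2020) are fixed files, so this stand-in only mirrors the proportions; the node count in the usage line is the commonly reported size of Chameleon and is an assumption here.

```python
import torch

def random_split_masks(num_nodes: int, seed: int):
    # One random 48%/32%/20% train/val/test split as boolean node masks.
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(0.48 * num_nodes)
    n_val = int(0.32 * num_nodes)
    masks = [torch.zeros(num_nodes, dtype=torch.bool) for _ in range(3)]
    masks[0][perm[:n_train]] = True                 # train
    masks[1][perm[n_train:n_train + n_val]] = True  # validation
    masks[2][perm[n_train + n_val:]] = True         # test
    return masks

# Report average performance over ten splits, as described above:
splits = [random_split_masks(2277, seed=s) for s in range(10)]  # 2277 nodes: Chameleon (assumed)
```
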
Hardware Specification: Yes. CoCN is trained on a single NVIDIA GeForce RTX 3090.
Software Dependencies: No. The paper mentions "PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019)" but does not specify their version numbers.
Experiment Setup: Yes. For experiments on all datasets, the learning rate is set to 1e-4, the hidden size is set to 64, and the dropout rate is set to 0.5. For the Adam optimizer (Kingma & Ba, 2015), the weight decay is chosen from {1e-4, 5e-4, 1e-3, 1e-2, 1e-1}. We also use early stopping regularization, where we stop training if the validation loss does not decrease for 100 epochs. The maximum epoch number is set to 200. The batch size is set to 4 on MUTAG and PROTEINS, 8 on NCI1, IMDB-BINARY, and IMDB-MULTI, and 32 on COLLAB.
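
The reported hyperparameters translate directly into a PyTorch training skeleton. The sketch below is a stand-in: the model is a placeholder for CoCN, and the single weight-decay value represents one point of the grid {1e-4, 5e-4, 1e-3, 1e-2, 1e-1} that would be tuned on validation loss.

```python
import torch

model = torch.nn.Sequential(     # placeholder for CoCN
    torch.nn.Linear(64, 64),     # hidden size 64
    torch.nn.Dropout(p=0.5),     # dropout rate 0.5
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-4)

best_val, stale = float("inf"), 0
for epoch in range(200):                    # maximum epoch number
    val_loss = float(torch.rand(1))         # placeholder for one train/eval pass
    if val_loss < best_val:
        best_val, stale = val_loss, 0
    else:
        stale += 1
    if stale >= 100:                        # early stopping: no improvement for 100 epochs
        break
```
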