Gaussian-Induced Convolution for Graphs

Authors: Jiatao Jiang, Zhen Cui, Chunyan Xu, Jian Yang (pp. 4007-4014)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct our multi-layer graph convolution network on several public datasets of graph classification. The extensive experiments demonstrate that our GIC is effective and can achieve the state-of-the-art results."
Researcher Affiliation | Academia | "Jiatao Jiang, Zhen Cui, Chunyan Xu, Jian Yang. PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China. {jiatao, zhen.cui, cyx, csjyang}@njust.edu.cn"
Pseudocode | No | The paper describes the model architecture and processing steps mathematically and in text, but it does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing source code, nor a link to a code repository for its methodology.
Open Datasets | Yes | "We use two types of datasets: Bioinformatics and Network datasets. The former contains MUTAG (Debnath et al. 1991), PTC (Toivonen et al. 2003), NCI1 and NCI109 (Wale, Watson, and Karypis 2008), ENZYMES (Borgwardt et al. 2005) and PROTEINS (Borgwardt et al. 2005); the latter contains COLLAB (Leskovec, Kleinberg, and Faloutsos 2005), REDDIT-BINARY, REDDIT-MULTI-5K, REDDIT-MULTI-12K, IMDB-BINARY and IMDB-MULTI."
Dataset Splits | Yes | "We perform 10-fold cross-validation, 9-fold for training and 1-fold for testing. The experiments are repeated 10 times and the average accuracies are reported."
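The evaluation protocol quoted above (10-fold cross-validation with 9 folds for training and 1 for testing, repeated with average accuracy reported) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the helper name, the fixed seed, and the shuffling strategy are assumptions.

```python
import random

def ten_fold_splits(n_graphs, seed=0):
    """Partition graph indices into 10 folds and yield (train, test)
    index lists, using 9 folds for training and 1 for testing.
    Hypothetical helper; the seed and shuffle are illustrative."""
    idx = list(range(n_graphs))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]  # 10 near-equal folds
    splits = []
    for k in range(10):
        test = folds[k]
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        splits.append((train, test))
    return splits

# Example: MUTAG contains 188 graphs.
splits = ten_fold_splits(188)
```

Each of the 10 splits covers all graphs exactly once, with the test fold disjoint from training; repeating the whole procedure 10 times with different shuffles and averaging the test accuracies matches the protocol described in the paper.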
Hardware Specification | No | The paper does not describe the specific hardware (e.g., CPU or GPU models) used for its experiments. It mentions "on a GPU" only as a general observation, not as a specification of the authors' setup.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | "Its configuration can simply be set as C(64)-P(0.25)-C(128)-P(0.25)-C(256)-P-FC(256)... The scale of receptive field and the number of Gaussian components are both set to 7. We train GIC network with stochastic gradient descent for roughly 300 epochs with a batch size of 100, where the learning rate is 0.1 and the momentum is 0.95."
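As a rough illustration of the reported optimizer settings (SGD with learning rate 0.1 and momentum 0.95), here is a minimal plain-Python sketch of a single momentum update. The function name and the sign convention are assumptions for illustration, not the authors' implementation:

```python
def sgd_momentum_step(params, grads, velocity, lr=0.1, momentum=0.95):
    """One SGD-with-momentum update: v <- momentum*v - lr*g; p <- p + v.
    Hyperparameter defaults match the values reported in the paper."""
    new_velocity = [momentum * v - lr * g for v, g in zip(velocity, grads)]
    new_params = [p + v for p, v in zip(params, new_velocity)]
    return new_params, new_velocity

# One step on a single scalar parameter with gradient 0.5:
p, v = sgd_momentum_step([1.0], [0.5], [0.0])
# p is approximately [0.95], v approximately [-0.05]
```

With momentum 0.95 the velocity decays slowly, so past gradients dominate each update; in a framework such as PyTorch the equivalent configuration would be an SGD optimizer with `lr=0.1, momentum=0.95`.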