CensNet: Convolution with Edge-Node Switching in Graph Neural Networks

Authors: Xiaodong Jiang, Pengsheng Ji, Sheng Li

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on real-world academic citation networks and quantum chemistry graphs show that our approach has achieved or matched the state-of-the-art performance. We evaluate the proposed CensNet method for three tasks on five benchmark data sets."
Researcher Affiliation | Academia | "Xiaodong Jiang1,2, Pengsheng Ji2 and Sheng Li1; 1Department of Computer Science, University of Georgia, Athens, GA, USA; 2Department of Statistics, University of Georgia, Athens, GA, USA; {xiaodong, psji, sheng.li}@uga.edu"
Pseudocode | Yes | "Algorithm 1: CensNet for Node Classification. Algorithm 2: Mini-batch CensNet."
Open Source Code | No | The paper provides neither a link to the source code for the CensNet method nor an explicit statement about its availability.
Open Datasets | Yes | "Tox21. The Toxicology in the 21st Century (Tox21, [Wu et al., 2018]) initiative created a public database measuring toxicity of compounds, which has been used in the 2014 Tox21 Data Challenge. ... Lipophilicity. The Lipophilicity is an important feature of drug molecules that affects both membrane permeability and solubility [Wu et al., 2018]. ... Cora, Citeseer, and PubMed. These three data sets are benchmarks for the semi-supervised node classification task, which has been analyzed by many graph convolutional network models such as the ones in [Defferrard et al., 2016a; Kipf and Welling, 2017; Hamilton et al., 2017; Veličković et al., 2018; Liao et al., 2019]. Cora has 2,708 nodes (papers) and 5,429 edges (citation links)... Citeseer has 3,327 nodes and 4,732 edges... PubMed contains 19,717 nodes and 44,338 edges... [Sen et al., 2008], and PubMed [Namata et al., 2012]"
Dataset Splits | Yes | "We then randomly split the dataset to different partitions as training, validation and test sets, respectively. We consider 4 data splitting settings, by keeping 60%, 70%, 80%, and 90% of the molecule graphs as the training set, while equally breaking the rest of the data sets as validation and test sets. For all three data sets, we randomly select 50% for validation and the rest for testing."
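The splitting protocol described above (a training fraction drawn from {60%, 70%, 80%, 90%}, with the remainder divided equally between validation and test) can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function name and use of a fixed random seed are assumptions:

```python
import random

def split_indices(n, train_frac, seed=0):
    """Shuffle n sample indices, assign train_frac of them to training,
    and divide the remainder equally into validation and test sets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed for repeatability (assumed)
    n_train = int(n * train_frac)
    n_val = (n - n_train) // 2
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = split_indices(1000, 0.6)
print(len(train), len(val), len(test))  # 600 200 200
```

The same helper covers all four splitting settings by varying `train_frac`.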
Hardware Specification | Yes | "All experiments are conducted on an Azure Linux VM (CPU: Intel(R) Xeon(R) CPU E5-2690 v3, GPU: NVIDIA Tesla K80)."
Software Dependencies | Yes | "We implemented all graph convolution network algorithms in PyTorch [Paszke et al., 2017] v1.0.0. For other classical algorithms (random forest, linear regression, logistic regression), we used the implementations in the Python package Scikit-learn [Pedregosa et al., 2011]."
Experiment Setup | Yes | "For graph convolution models, we didn't implement any sophisticated fine-tuning strategies but tried different settings of learning rate from {0.01, 0.005, 0.001, 0.0005}, batch size {16, 32, 64, 128, 256}, number of epochs {200, 300, 500, 1000}, etc. We implemented a three-layer architecture (Node Layer → Edge Layer → Node Layer) with 32 units in each hidden layer."
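The hyperparameter grid quoted above can be enumerated directly. The sketch below shows such a sweep under stated assumptions: the `evaluate` function is a hypothetical stand-in for training one configuration of the three-layer model and returning its validation score, not anything the paper provides:

```python
from itertools import product

# Grid of settings reported in the paper's experiment setup.
learning_rates = [0.01, 0.005, 0.001, 0.0005]
batch_sizes = [16, 32, 64, 128, 256]
epoch_counts = [200, 300, 500, 1000]

def evaluate(lr, batch_size, epochs):
    # Hypothetical placeholder: in practice, train the three-layer model
    # (Node Layer -> Edge Layer -> Node Layer, 32 hidden units per layer)
    # with these settings and return its validation metric.
    return 0.0

configs = list(product(learning_rates, batch_sizes, epoch_counts))
best = max(configs, key=lambda cfg: evaluate(*cfg))
print(len(configs))  # 80 configurations in total
```

With 4 x 5 x 4 = 80 combinations, this exhaustive sweep stays tractable, consistent with the authors' statement that no sophisticated fine-tuning strategy was used.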