Curvature Graph Network

Authors: Ze Ye, Kin Sum Liu, Tengfei Ma, Jie Gao, Chao Chen

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To further investigate how curvature information affects graph convolution, we carried out extensive experiments with various synthetic graphs and real-world graphs. Our synthetic graphs are generated according to various well-established graph models, e.g., the stochastic block model (Decelle et al., 2011), Watts-Strogatz network (Watts & Strogatz, 1998), Newman-Watts network (Newman & Watts, 1999) and Kleinberg's navigable small-world graph (Kleinberg, 2000). On these data, CurvGN outperforms the vanilla graph network and networks using node degree information and self-attention, demonstrating the benefit of curvature information in graph convolution.
Researcher Affiliation | Collaboration | Ze Ye, Department of Biomedical Informatics, Stony Brook University (ze.ye@stonybrook.edu); Kin Sum Liu, Department of Computer Science, Stony Brook University (kiliu@cs.stonybrook.edu); Tengfei Ma, IBM Research AI (Tengfei.Ma1@ibm.com); Jie Gao, Department of Computer Science, Rutgers University (jg1555@rutgers.edu); Chao Chen, Department of Biomedical Informatics, Stony Brook University (chao.chen.1@stonybrook.edu)
Pseudocode | No | The paper does not contain any clearly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not include an explicit statement about releasing its source code or provide a link to a code repository for the methodology described.
Open Datasets | Yes | We use three popular citation network benchmark datasets: Cora, CiteSeer and PubMed (Sen et al., 2008). We also use four extra datasets: Coauthor CS and Coauthor Physics, which are co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 challenge; Amazon Computers and Amazon Photos, which are segments of the Amazon co-purchase graph in McAuley et al. (2015).
Dataset Splits | Yes | For each generated graph, we randomly select 400 nodes as the training set, another 400 nodes as the validation set and the remaining 200 nodes as the test set.
Hardware Specification | Yes | In Table 3, we show the computation time for the curvatures with two 18-core CPUs.
Software Dependencies | No | The paper mentions using 'ECOS' as an interior-point solver but does not provide a specific version number for it or any other software dependencies.
Experiment Setup | Yes | For synthetic experiments, the hidden layer output is reduced to 8 dimensions. During the training stage, we set L2 regularization with λ = 0.0005 for all datasets. Also, all the models are initialized by Glorot initialization and trained by minimizing cross-entropy loss using the Adam SGD optimizer with learning rate 0.005. We apply an early stopping strategy based on the validation set's accuracy with a patience of 100 epochs.
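The four synthetic graph families named in the Research Type row all have standard generators in networkx. The sketch below shows how such graphs could be produced; the sizes and model parameters here are illustrative assumptions, not the paper's settings.

```python
# Illustrative generation of the four synthetic graph families; sizes and
# parameters are our own choices, not those reported in the paper.
import networkx as nx

n = 1000
graphs = {
    # Stochastic block model: two blocks, dense intra-block, sparse inter-block edges
    "sbm": nx.stochastic_block_model(
        [n // 2, n // 2], [[0.02, 0.002], [0.002, 0.02]], seed=0
    ),
    # Watts-Strogatz: ring lattice with k neighbors, rewiring probability p
    "watts_strogatz": nx.watts_strogatz_graph(n, k=4, p=0.1, seed=0),
    # Newman-Watts: like Watts-Strogatz, but adds shortcut edges instead of rewiring
    "newman_watts": nx.newman_watts_strogatz_graph(n, k=4, p=0.1, seed=0),
    # Kleinberg's navigable small world: grid plus long-range links, clustering exponent r
    "kleinberg": nx.navigable_small_world_graph(int(n ** 0.5), p=1, q=1, r=2, seed=0),
}
for name, g in graphs.items():
    print(name, g.number_of_nodes(), g.number_of_edges())
```

Note that Kleinberg's model produces a directed graph over grid-coordinate nodes, whereas the other three generators return undirected graphs.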
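The 400/400/200 node split quoted in the Dataset Splits row can be sketched as follows; the node IDs and fixed seed are assumptions for illustration (the paper does not report a seed).

```python
# Minimal sketch of the random 400/400/200 node split described above.
# Node IDs 0..999 and the seed are illustrative assumptions.
import random

rng = random.Random(0)
nodes = list(range(1000))
rng.shuffle(nodes)
train, val, test = nodes[:400], nodes[400:800], nodes[800:]
```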
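The optimization settings quoted in the Experiment Setup row (Glorot initialization, Adam with learning rate 0.005, L2 regularization λ = 0.0005, cross-entropy loss, early stopping with patience 100) map directly onto a PyTorch training loop. The sketch below uses a stand-in linear model and random data, since the paper's CurvGN model is not reproduced here; the paper also monitors validation accuracy, while this sketch monitors validation loss for brevity.

```python
# Hedged sketch of the reported training configuration on a stand-in model;
# the model, data, and the use of validation loss (vs. accuracy) are ours.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 7)                   # placeholder for the graph network
nn.init.xavier_uniform_(model.weight)      # Glorot initialization
optimizer = torch.optim.Adam(
    model.parameters(), lr=0.005, weight_decay=0.0005  # L2 regularization via weight_decay
)
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(400, 16), torch.randint(0, 7, (400,))    # stand-in training set
xv, yv = torch.randn(400, 16), torch.randint(0, 7, (400,))  # stand-in validation set

best_val, patience, wait = float("inf"), 100, 0  # early stopping state
for epoch in range(1000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        val_loss = criterion(model(xv), yv).item()
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:  # stop after 100 epochs without improvement
            break
```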