EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs

Authors: Aldo Pareja, Giacomo Domeniconi, Jie Chen, Tengfei Ma, Toyotaro Suzumura, Hiroki Kanezashi, Tim Kaler, Tao Schardl, Charles Leiserson

AAAI 2020, pp. 5363-5370

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed approach on tasks including link prediction, edge classification, and node classification. The experimental results indicate a generally higher performance of EvolveGCN compared with related approaches. ... In this section, we present a comprehensive set of experiments to demonstrate the effectiveness of EvolveGCN.
Researcher Affiliation | Collaboration | 1 MIT-IBM Watson AI Lab, 2 IBM Research, 3 MIT CSAIL; {Aldo.Pareja, Giacomo.Domeniconi1, Tengfei.Ma1}@ibm.com, {chenjie, tsuzumura, hirokik}@us.ibm.com, {tfk, neboat, cel}@mit.edu
Pseudocode | Yes | 1: function Ht = g(Xt, Ht-1) ... 1: function Zt = summarize(Xt, k) ... 1: function Ht = f(Xt) (an illustrative sketch of the weight-evolution idea follows the table)
Open Source Code | Yes | The code is available at https://github.com/IBM/EvolveGCN.
Open Datasets | Yes | Bitcoin OTC [1] (BC-OTC for short): BC-OTC is a who-trusts-whom network of bitcoin users trading on the platform http://www.bitcoin-otc.com. ... Footnote URLs: [1] http://snap.stanford.edu/data/soc-sign-bitcoin-otc.html [2] http://snap.stanford.edu/data/soc-sign-bitcoin-alpha.html [3] http://konect.uni-koblenz.de/networks/opsahl-ucsocial [4] http://snap.stanford.edu/data/as-733.html [5] http://snap.stanford.edu/data/soc-RedditHyperlinks.html [6] https://www.kaggle.com/ellipticco/elliptic-data-set
Dataset Splits | Yes | These data sets are summarized in Table 1. Training/validation/test splits are done along the temporal dimension. ... Table 1 (Data set: # Nodes, # Edges, # Time Steps Train / Val / Test): SBM 1,000, 4,870,863, 35 / 5 / 10; BC-OTC 5,881, 35,588, 95 / 14 / 28; BC-Alpha 3,777, 24,173, 95 / 13 / 28; UCI 1,899, 59,835, 62 / 9 / 17; AS 6,474, 13,895, 70 / 10 / 20; Reddit 55,863, 858,490, 122 / 18 / 34; Elliptic 203,769, 234,355, 31 / 5 / 13. (A split sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware used for its experiments (no GPU/CPU models, clock speeds, or memory amounts are given).
Software Dependencies | No | The paper does not provide ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | Following convention, GCN has two layers and MLP has one layer. The embedding size of both GCN layers is set the same, to reduce the effort of hyperparameter tuning. The time window for sequence learning is 10 time steps, except for SBM and Elliptic, where it is 5.
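
The pseudocode row lists a function g(Xt, Ht-1) that evolves the GCN weights across time steps with a recurrent unit. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation (which is at https://github.com/IBM/EvolveGCN): the per-column GRUCell is an assumed simplification of the matrix recurrent unit in the paper, and all class, function, and variable names here are hypothetical.

```python
# Minimal sketch: a recurrent cell evolves the GCN weight matrix across graph
# snapshots, and the evolved weights are used in a standard GCN propagation
#   H_t^{(l+1)} = sigma(A_hat_t @ H_t^{(l)} @ W_t^{(l)}).
# Simplification: each column of W is treated as one element of a GRUCell batch,
# with the previous weights serving as both input and hidden state (loosely
# following the "-O" flavor, where the weights themselves are carried through
# the recurrent unit). The paper's two-layer GCN would stack two such layers.
import torch
import torch.nn as nn


class EvolvingGCNLayer(nn.Module):
    def __init__(self, in_feats: int, out_feats: int):
        super().__init__()
        # Initial GCN weights; later time steps use the evolved weights instead.
        self.weight = nn.Parameter(torch.empty(in_feats, out_feats))
        nn.init.xavier_uniform_(self.weight)
        # Recurrent cell that updates each column of the weight matrix.
        self.evolve = nn.GRUCell(in_feats, in_feats)

    def forward(self, a_hat: torch.Tensor, h: torch.Tensor, w_prev: torch.Tensor):
        # Evolve weights: treat the out_feats columns as a batch of vectors.
        w_t = self.evolve(w_prev.t(), w_prev.t()).t()
        # Standard GCN propagation with the evolved weights.
        return torch.relu(a_hat @ h @ w_t), w_t


if __name__ == "__main__":
    layer = EvolvingGCNLayer(in_feats=8, out_feats=8)
    w = layer.weight
    for t in range(5):                  # iterate over graph snapshots
        a_hat = torch.eye(10)           # placeholder normalized adjacency at time t
        x_t = torch.randn(10, 8)        # placeholder node features at time t
        h_t, w = layer(a_hat, x_t, w)   # weights are carried across time steps
```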
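The dataset-splits row states that training/validation/test splits are made along the temporal dimension, i.e., contiguous blocks of time steps (e.g., UCI uses 62 / 9 / 17 snapshots). A small sketch of such a split is below; the function name and the list-of-snapshots representation are assumptions for illustration, not taken from the authors' code.

```python
# Sketch of a temporal split: snapshots ordered by time, cut into contiguous
# train / validation / test blocks as in Table 1.
def temporal_split(snapshots, n_train, n_val, n_test):
    assert len(snapshots) == n_train + n_val + n_test
    train = snapshots[:n_train]
    val = snapshots[n_train:n_train + n_val]
    test = snapshots[n_train + n_val:]
    return train, val, test
```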