Graph Edit Networks

Authors: Benjamin Paassen, Daniele Grattarola, Daniele Zambon, Cesare Alippi, Barbara Hammer

ICLR 2021

Reproducibility assessment. Each variable below is listed with its result and the supporting LLM response (a quote or note extracted from the paper).

Research Type: Experimental
Quote: "In addition to this core theoretical contribution, we provide a proof-of-concept of our model by demonstrating that GENs can learn a variety of dynamical systems on graphs which are more difficult to handle for baseline systems from the literature. We also show that the sparsity of edits enables GENs to scale up to realistic graphs with thousands of nodes."

Researcher Affiliation: Academia
Quote: "Benjamin Paassen, The University of Sydney, benjamin.paassen@sydney.edu.au; Daniele Grattarola, Università della Svizzera italiana, daniele.grattarola@usi.ch; Daniele Zambon, Università della Svizzera italiana, daniele.zambon@usi.ch; Cesare Alippi, Università della Svizzera italiana / Politecnico di Milano, cesare.alippi@usi.ch; Barbara Hammer, Bielefeld University, bhammer@techfak.uni-bielefeld.de"

Pseudocode: Yes
Quote: "Algorithm 1: The scheme to translate the outputs of the GEN layer ν_i, y_i, e_i^+, e_i^-, and ε_{i,j} to graph edits."
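
The quoted caption describes translating GEN layer outputs into discrete graph edits. A minimal pure-Python sketch of that idea is below; the function name `translate_edits`, the ±0.5 thresholds, and the exact edit semantics (insert/delete node, insert/delete edge gated by the per-node filters e_i^+ and e_i^-) are assumptions for illustration, not the paper's Algorithm 1.

```python
def translate_edits(nu, eps, e_plus, e_minus, t=0.5):
    """Translate GEN layer outputs into a list of graph edits.

    nu      : per-node edit scores (assumed: > t means insert, < -t means delete)
    eps     : pairwise edge scores eps[i][j]
    e_plus  : per-node edge-insertion filter scores (assumed gate)
    e_minus : per-node edge-deletion filter scores (assumed gate)
    Thresholds and semantics are illustrative, not the paper's exact rules.
    """
    edits = []
    n = len(nu)
    # Node edits from the node scores nu_i.
    for i in range(n):
        if nu[i] > t:
            edits.append(("insert_node", i))
        elif nu[i] < -t:
            edits.append(("delete_node", i))
    # Edge edits: only consider pairs whose source node passes a filter,
    # which is what makes the edit list sparse.
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if e_plus[i] > t and eps[i][j] > t:
                edits.append(("insert_edge", i, j))
            elif e_minus[i] > t and eps[i][j] < -t:
                edits.append(("delete_edge", i, j))
    return edits
```

The filter scores keep the loop over edge scores from producing dense edit sets, which matches the paper's claim that sparse edits let GENs scale to graphs with thousands of nodes.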

Open Source Code: Yes
Quote: "All experimental code is available at https://gitlab.com/bpaassen/graph-edit-networks."

Open Datasets: Yes
Quote: "We evaluated the runtime of GENs on a variation of the HEP-Th paper dataset of Leskovec et al. (2007)."

Dataset Splits: Yes
Quote: "To obtain statistics we performed a 5-fold cross-validation on all datasets."
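
The paper reports 5-fold cross-validation but does not publish its splitting code, so the following is a hypothetical sketch of a deterministic k-fold split over dataset indices; the helper name `k_fold_splits` is an assumption.

```python
def k_fold_splits(n_items, k=5):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.

    Deterministic round-robin assignment of items to folds; each item
    appears in exactly one test fold.
    """
    indices = list(range(n_items))
    folds = [indices[f::k] for f in range(k)]  # fold f gets items f, f+k, f+2k, ...
    for f in range(k):
        test = folds[f]
        train = [i for g, fold in enumerate(folds) if g != f for i in fold]
        yield train, test
```

Each of the k iterations trains on k-1 folds and evaluates on the held-out fold, which is how per-dataset statistics such as means and standard deviations are typically obtained.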

Hardware Specification: Yes
Quote: "We perform all experiments on a consumer grade laptop with core i7 CPU."

Software Dependencies: No
Note: The paper mentions 'pyTorch' but does not specify its version or any other software dependencies with version numbers.

Experiment Setup: Yes
Quote: "For all models, we use two graph neural network layers with 64 neurons each, sum as aggregation function and concatenation as merge function (refer to Equation 1). We train all networks with an Adam optimizer in pyTorch using a learning rate of 10^-3 and stopping training after 30,000 time series or if the loss dropped below 10^-3."
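
To make the quoted layer description concrete, here is a minimal pure-Python sketch of a single message-passing layer with sum aggregation and concatenation merge. It is a sketch under assumptions: the paper uses 64-neuron PyTorch layers per Equation 1, whereas the weight matrix `W`, bias `b`, and ReLU nonlinearity here are illustrative choices.

```python
def gnn_layer(node_feats, adj, W, b):
    """One message-passing layer: sum-aggregate neighbour features,
    concatenate them with the node's own features (merge), then apply
    an affine map followed by ReLU.

    node_feats : list of per-node feature vectors (length d each)
    adj        : n x n adjacency matrix (0/1 entries)
    W, b       : weights (rows of length 2*d) and bias of the affine map
    """
    n = len(node_feats)
    d = len(node_feats[0])
    out = []
    for i in range(n):
        # Sum aggregation over neighbours of node i.
        agg = [0.0] * d
        for j in range(n):
            if adj[i][j]:
                for k in range(d):
                    agg[k] += node_feats[j][k]
        # Concatenation merge: own features followed by the aggregate.
        merged = node_feats[i] + agg
        # Affine map + ReLU over the merged (length 2*d) vector.
        h = [max(0.0, sum(W[r][c] * merged[c] for c in range(2 * d)) + b[r])
             for r in range(len(W))]
        out.append(h)
    return out
```

In the paper's setup, two such layers with 64 output neurons each are stacked and trained with Adam at learning rate 10^-3 until either 30,000 time series have been seen or the loss falls below 10^-3.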