Directed Graph Auto-Encoders

Authors: Georgios Kollias, Vasileios Kalantzis, Tsuyoshi Idé, Aurélie Lozano, Naoki Abe

AAAI 2022, pp. 7211-7219 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the ability of the proposed model to learn meaningful latent embeddings and achieve superior performance on the directed link prediction task on several popular network datasets." "In this section we demonstrate the performance of the proposed approach on the directed link prediction task associated with two different datasets, namely: (a) Cora ML (2,995 nodes, 8,416 edges, 2,879 features), and (b) CiteSeer (3,312 nodes, 4,715 edges, 3,703 features)." "We compare the performance of our DiGAE models to Standard GAE in (Kipf and Welling 2016b), and to Source/Target (S/T) GAE and Gravity GAE, both in (Salha et al. 2019)."
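The baselines quoted above differ from the directed models mainly in how edge direction enters the decoder. Below is a minimal sketch of that contrast, assuming a generic source/target factorization in the spirit of Salha et al. (2019); the function names are illustrative and are not the DiGAE repository's API.

```python
import torch

def symmetric_decoder(z: torch.Tensor, src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
    # Standard GAE: p(i -> j) = sigmoid(z_i . z_j). The score is symmetric
    # in (i, j), so edge direction cannot be recovered.
    return torch.sigmoid((z[src] * z[dst]).sum(dim=-1))

def directed_decoder(z_source: torch.Tensor, z_target: torch.Tensor,
                     src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
    # Directed variant: each node i keeps a source embedding s_i and a target
    # embedding t_i, so p(i -> j) = sigmoid(s_i . t_j) != p(j -> i) in general.
    return torch.sigmoid((z_source[src] * z_target[dst]).sum(dim=-1))
```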
Researcher Affiliation | Industry | Georgios Kollias, Vasileios Kalantzis, Tsuyoshi Idé, Aurélie Lozano, Naoki Abe; IBM Research, T. J. Watson Research Center; {gkollias, vkal, tide, aclozano, nabe}@us.ibm.com
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/gidiko/DiGAE.
Open Datasets | Yes | "In this section we demonstrate the performance of the proposed approach on the directed link prediction task associated with two different datasets, namely: (a) Cora ML (2,995 nodes, 8,416 edges, 2,879 features), and (b) CiteSeer (3,312 nodes, 4,715 edges, 3,703 features)."
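Both citation graphs are available through standard loaders. A minimal sketch, assuming PyTorch Geometric's CitationFull dataset; the paper does not say which distribution was used, and the CiteSeer variant shipped with PyG may not match the quoted statistics exactly, while Cora_ML does.

```python
from torch_geometric.datasets import CitationFull

# to_undirected=False keeps the original edge directions, which matters
# for directed link prediction.
cora_ml = CitationFull(root="data", name="Cora_ML", to_undirected=False)[0]
print(cora_ml)  # expected: Data(x=[2995, 2879], edge_index=[2, 8416], ...)
```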
Dataset Splits | Yes | "We randomly remove 15% of the directed edges from the graph and train models on the remaining edge set. Two thirds of the removed edges (i.e. 10% of all input graph edges) are used as actual edges for testing, and one third (i.e. 5% of all input graph edges) as actual edges for validation."
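The quoted 85/5/10 edge split can be reproduced with PyTorch Geometric's RandomLinkSplit transform; this is a sketch of the protocol, not the authors' own splitting code, which may differ.

```python
from torch_geometric.datasets import CitationFull
from torch_geometric.transforms import RandomLinkSplit

data = CitationFull(root="data", name="Cora_ML", to_undirected=False)[0]

split = RandomLinkSplit(
    num_val=0.05,           # 5% of edges held out for validation
    num_test=0.10,          # 10% of edges held out for testing
    is_undirected=False,    # keep direction: (i, j) and (j, i) are distinct
    add_negative_train_samples=False,
)
train_data, val_data, test_data = split(data)
```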
Hardware Specification | Yes | "We ran experiments for all models (ours and baselines) on a system equipped with an Intel(R) Core(TM) i7-8850H CPU @ 2.60 GHz (6 cores/12 threads) and 32 GB of DDR4 memory @ 2400 MHz."
Software Dependencies | No | "We use Python and especially the PyTorch library and PyTorch Geometric (Fey and Lenssen 2019), which is a geometric deep learning extension library." However, specific version numbers for these software components are not provided.
Experiment Setup | Yes | "We employ grid search for hyperparameter tuning: learning rate η ∈ {0.005, 0.01}, hidden layer dimension d ∈ {32, 64} with d/2 for the latent space dimension, (α, β) ∈ {0.0, 0.2, 0.4, 0.6, 0.8}² for DiGAE models, and parameter λ ∈ {0.1, 1.0, 10.0} for Gravity GAE. For the final models we select hyperparameter values that maximize mean AUC computed on the validation set. In all cases, models are trained for 200 epochs, using the Adam optimizer, without dropout, performing full-batch gradient descent."
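The quoted grid translates directly into a selection loop. A minimal sketch, where train_and_eval is a hypothetical placeholder for the actual DiGAE training code in the linked repository:

```python
from itertools import product

def train_and_eval(eta, d, alpha, beta, epochs=200):
    """Hypothetical placeholder: train DiGAE (Adam, lr=eta, hidden dim d,
    latent dim d/2, full-batch, no dropout) and return mean validation AUC."""
    ...  # training loop elided
    return 0.0

best_auc, best_cfg = -1.0, None
for eta, d, alpha, beta in product([0.005, 0.01],            # learning rate
                                   [32, 64],                  # hidden dimension
                                   [0.0, 0.2, 0.4, 0.6, 0.8], # alpha
                                   [0.0, 0.2, 0.4, 0.6, 0.8]):# beta
    val_auc = train_and_eval(eta, d, alpha, beta, epochs=200)
    if val_auc > best_auc:
        best_auc, best_cfg = val_auc, (eta, d, alpha, beta)
```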