Adversarially Regularized Graph Autoencoder for Graph Embedding

Authors: Shirui Pan, Ruiqi Hu, Guodong Long, Jing Jiang, Lina Yao, Chengqi Zhang

IJCAI 2018

Reproducibility assessment: each variable below lists the result and the supporting LLM response.
Research Type: Experimental
  "Experimental studies on real-world graphs validate our design and demonstrate that our algorithms outperform baselines by a wide margin in link prediction, graph clustering, and graph visualization tasks."
Researcher Affiliation: Academia
  (1) Centre for Artificial Intelligence, FEIT, University of Technology Sydney, Australia; (2) School of Computer Science and Engineering, University of New South Wales, Australia
Pseudocode: Yes
  The paper provides Algorithm 1, "Adversarially Regularized Graph Embedding."
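For orientation, Algorithm 1 alternates between a discriminator update and an autoencoder (generator) update. Below is a minimal PyTorch sketch of one such training step, not the authors' implementation: it assumes hypothetical `encoder` and `discriminator` modules (such as those sketched under Experiment Setup below) and a dense 0/1 float adjacency matrix `adj`, and the unweighted reconstruction loss is a simplification of the paper's objective.

```python
import torch
import torch.nn.functional as F

def train_step(encoder, discriminator, opt_enc, opt_disc, x, adj):
    # --- 1. Discriminator update: real = prior samples, fake = embeddings ---
    z = encoder(x, adj).detach()        # embeddings; no encoder gradients here
    prior = torch.randn_like(z)         # samples from the Gaussian prior N(0, I)
    d_real = discriminator(prior)
    d_fake = discriminator(z)
    disc_loss = (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )
    opt_disc.zero_grad()
    disc_loss.backward()
    opt_disc.step()

    # --- 2. Encoder/generator update: reconstruct A and fool the discriminator ---
    z = encoder(x, adj)
    adj_logits = z @ z.t()              # inner-product decoder for the adjacency
    recon_loss = F.binary_cross_entropy_with_logits(adj_logits, adj)
    d_fake = discriminator(z)
    gen_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss = recon_loss + gen_loss
    opt_enc.zero_grad()
    loss.backward()
    opt_enc.step()
    return disc_loss.item(), loss.item()
```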
Open Source Code: No
  The paper does not provide any explicit statement or link to open-source code for the described methodology.
Open Datasets: Yes
  "The benchmark graph datasets used in the paper are summarized in Table 1. Each dataset consists of scientific publications as nodes and citation relationships as edges. The features are unique words in each document."

  Data Set    # Nodes   # Links   # Content Words   # Features
  Cora          2,708     5,429        3,880,564         1,433
  Citeseer      3,327     4,732       12,274,336         3,703
  PubMed       19,717    44,338        9,858,500           500
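For context, all three citation graphs are publicly available. One convenient loader, not used or mentioned in the paper, is PyTorch Geometric's `Planetoid` dataset (this sketch assumes `torch_geometric` is installed). Note that it stores each undirected edge as two directed pairs, so the reported edge counts will be roughly double those in Table 1.

```python
from torch_geometric.datasets import Planetoid

# Downloads and caches the citation graphs; node features are bag-of-words.
for name in ("Cora", "CiteSeer", "PubMed"):
    data = Planetoid(root="/tmp/planetoid", name=name)[0]
    print(name, data.num_nodes, data.num_edges, data.num_node_features)
```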
Dataset Splits: Yes
  Each dataset is split into training, validation, and test sets: the validation set contains 5% of the citation edges for hyperparameter optimization, the test set holds 10% of the citation edges to verify performance, and the remaining edges are used for training.
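The paper does not publish its splitting code, but the described 85/5/10 edge split is straightforward. A minimal NumPy sketch, assuming an undirected edge list `edges` of shape (E, 2); the seed and function name are illustrative:

```python
import numpy as np

def split_edges(edges, val_frac=0.05, test_frac=0.10, seed=0):
    """Shuffle the undirected edge list and carve out validation/test edges."""
    rng = np.random.default_rng(seed)
    edges = edges[rng.permutation(len(edges))]
    n_val = int(len(edges) * val_frac)       # 5% for hyperparameter tuning
    n_test = int(len(edges) * test_frac)     # 10% for final evaluation
    val = edges[:n_val]
    test = edges[n_val:n_val + n_test]
    train = edges[n_val + n_test:]           # remaining 85% for training
    return train, val, test
```

Link-prediction evaluation would additionally sample an equal number of non-edges as negatives for the validation and test sets, which is omitted here.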
Hardware Specification: No
  The acknowledgements section states: "We acknowledge the support of NVIDIA Corporation and Make Magic Australia with the donation of GPU used for this research." However, it does not specify the GPU model or any other hardware components.
Software Dependencies: No
  The paper mentions an optimizer (the Adam algorithm) and algorithms (K-means, t-SNE) but does not provide version numbers for any software libraries or dependencies used in the experiments.
Experiment Setup: Yes
  For the Cora and Citeseer datasets, all autoencoder-related models are trained for 200 iterations and optimized with the Adam algorithm, with both the learning rate and the discriminator learning rate set to 0.001. Because the PubMed dataset is relatively large (around 20,000 nodes), training runs for 2,000 iterations, with a 0.008 discriminator learning rate and a 0.001 learning rate. All encoders are built with a 32-neuron hidden layer and a 16-neuron embedding layer, and all discriminators have two hidden layers (16 and 64 neurons, respectively).
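Translating those quoted sizes into code, below is a minimal PyTorch sketch, not the authors' implementation: the paper's encoder is a two-layer GCN, which is reduced here to a dense propagation step (`adj_norm @ XW`) for brevity, and the Adam learning rates follow the quoted Cora/Citeseer settings. The input dimension of 1,433 matches Cora's feature count from Table 1.

```python
import torch
import torch.nn as nn

class GCNEncoder(nn.Module):
    """Two-layer graph encoder: 32-neuron hidden layer, 16-neuron embedding."""
    def __init__(self, in_dim, hid_dim=32, emb_dim=16):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, emb_dim, bias=False)

    def forward(self, x, adj_norm):
        # adj_norm is the dense, symmetrically normalized adjacency matrix.
        h = torch.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)

class Discriminator(nn.Module):
    """MLP with two hidden layers (16 and 64 neurons) and a scalar logit."""
    def __init__(self, emb_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, 16), nn.ReLU(),
            nn.Linear(16, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, z):
        return self.net(z)

encoder, disc = GCNEncoder(in_dim=1433), Discriminator()
opt_enc = torch.optim.Adam(encoder.parameters(), lr=0.001)   # learning rate
opt_disc = torch.optim.Adam(disc.parameters(), lr=0.001)     # discriminator lr
```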