Stabilizing and Enhancing Link Prediction through Deepened Graph Auto-Encoders
Authors: Xinxing Wu, Qiang Cheng
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, extensive experiments on various datasets demonstrate the competitive performance of our proposed approach. Theoretically, we prove that our deep extensions can inclusively express multiple polynomial filters with different orders. |
| Researcher Affiliation | Academia | University of Kentucky, Lexington, Kentucky, U.S.A. xinxingwu@gmail.com, qiang.cheng@uky.edu |
| Pseudocode | No | The paper describes the models using mathematical equations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The codes of this paper are available at https://github.com/xinxingwu-uk/DGAE. |
| Open Datasets | Yes | Firstly, we employ three standard benchmark datasets, i.e., Cora, Citeseer, and Pubmed; then, we also evaluate our deep extensions on three webpage-related datasets. We summarize the data statistics in Table 1. |
| Dataset Splits | Yes | We train all the models by randomly removing 15% of links while keeping all node features, and the validation and test sets are formed at a ratio of 5:10 from the removed edges and the corresponding node pairs; the model weights are initialized using the Glorot uniform technique. For all experiments, the obtained mean results and standard deviations are for 10 runs over 10 different random train/validation/test splits of the datasets. (A sketch of this split procedure is given below the table.) |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., CPU, GPU models, memory, or cloud instances). |
| Software Dependencies | No | The paper mentions using the Adam optimizer and t-SNE but does not provide specific version numbers for any software libraries or frameworks used in the implementation. |
| Experiment Setup | Yes | In all experiments, we set the maximum number of epochs to 200 and adopt the Adam optimizer with an initial learning rate of 0.01. For simplicity, we construct our deep encoders with (k−1) 32-neuron hidden layers, for k in (5) or (6), and a 16-neuron latent embedding layer. Besides, we perform a grid search over {0.000001, 0.000005, 0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 1.5, 2} to tune hyper-parameters for our models according to performance on the validation set. (An illustrative configuration sketch follows the table.) |
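The Dataset Splits row describes removing 15% of links at random and dividing them into validation and test sets at a 5:10 ratio. Below is a minimal, hypothetical sketch of such a split on a SciPy sparse adjacency matrix; it is not the authors' released code (see their repository linked above), and the function name `split_edges` as well as the omission of negative (non-edge) pair sampling are simplifications for illustration.

```python
# Hypothetical edge-split sketch (not the authors' implementation).
# Assumes an undirected graph given as a scipy.sparse adjacency matrix.
import numpy as np
import scipy.sparse as sp

def split_edges(adj, val_frac=0.05, test_frac=0.10, seed=0):
    """Hold out val_frac + test_frac of links (5:10 ratio, 15% total) for evaluation."""
    rng = np.random.default_rng(seed)
    # Take each undirected edge once (strict upper triangle, no self-loops).
    adj_upper = sp.triu(adj, k=1)
    edges = np.array(adj_upper.nonzero()).T          # shape: (num_edges, 2)
    rng.shuffle(edges)

    n_val = int(np.floor(len(edges) * val_frac))
    n_test = int(np.floor(len(edges) * test_frac))
    val_edges = edges[:n_val]
    test_edges = edges[n_val:n_val + n_test]
    train_edges = edges[n_val + n_test:]             # remaining ~85% kept for training

    # Rebuild a symmetric training adjacency from the kept edges only.
    n = adj.shape[0]
    data = np.ones(len(train_edges))
    adj_train = sp.csr_matrix((data, (train_edges[:, 0], train_edges[:, 1])), shape=(n, n))
    adj_train = adj_train + adj_train.T
    # Note: an equal number of negative (non-edge) node pairs would also be
    # sampled for validation/test scoring; that step is omitted here.
    return adj_train, train_edges, val_edges, test_edges
```

Running this once per seed, for 10 seeds, reproduces the "10 runs over 10 different random train/validation/test splits" protocol quoted above.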
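Likewise, the Experiment Setup row can be read as the following hedged PyTorch-style sketch of the stated layer widths, Glorot uniform initialization, and Adam settings. The paper's actual graph propagation (its deepened graph auto-encoder filters) is abstracted away here, and `build_encoder` is an illustrative helper, not the authors' API.

```python
# Illustrative configuration sketch under stated assumptions; the message-passing
# internals of the paper's deep encoders are not reproduced here.
import torch
import torch.nn as nn

HIDDEN, LATENT, EPOCHS, LR = 32, 16, 200, 0.01

def build_encoder(in_dim, k):
    """(k - 1) hidden layers of 32 neurons followed by a 16-neuron embedding layer."""
    dims = [in_dim] + [HIDDEN] * (k - 1) + [LATENT]
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        linear = nn.Linear(d_in, d_out, bias=False)
        nn.init.xavier_uniform_(linear.weight)       # Glorot uniform initialization
        layers.append(linear)
    return nn.ModuleList(layers)

encoder = build_encoder(in_dim=1433, k=5)            # e.g., Cora has 1433 node features
optimizer = torch.optim.Adam(
    (p for layer in encoder for p in layer.parameters()), lr=LR
)
# Training would then run for up to EPOCHS epochs, with hyper-parameters
# (e.g., regularization weights) chosen by grid search on the validation set.
```

For example, k = 5 with Cora's 1433-dimensional features yields an encoder of widths 1433 → 32 → 32 → 32 → 32 → 16, matching the "(k−1) 32-neuron hidden layers" plus a 16-neuron latent embedding layer described in the table.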