Relational Deep Learning: A Deep Latent Variable Model for Link Prediction
Authors: Hao Wang, Xingjian Shi, Dit-Yan Yeung
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three real-world datasets show that RDL works surprisingly well and significantly outperforms the state of the art. |
| Researcher Affiliation | Academia | Hao Wang, Xingjian Shi, Dit-Yan Yeung Department of Computer Science and Engineering Hong Kong University of Science and Technology Clear Water Bay, Hong Kong |
| Pseudocode | No | The paper describes algorithms but does not present any formal pseudocode blocks or sections labeled "Algorithm". |
| Open Source Code | No | The paper does not provide any concrete access (link or explicit statement) to the source code for the methodology described. |
| Open Datasets | Yes | We use three datasets, two from CiteULike and one from arXiv, in our experiments. The first two datasets are from (Wang, Chen, and Li 2013). ... The last dataset, arXiv, is from the SNAP datasets (Leskovec and Krevl 2014). |
| Dataset Splits | Yes | In the experiments, we first use a validation set to find the optimal hyperparameters for CMF, RTM, gRTM, and RDL. ... We randomly select 80% of the nodes as the training set and use the rest as the test set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions using certain models and frameworks (e.g., SDAE, CNN, PyTorch) but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | For CMF, we set the regularization hyperparameters for the latent factors of different contexts to 10. After the grid search, we find that CMF performs best when the weights for the adjacency matrix and content matrix (BOW) are 8 and 2 for all three datasets. We find that RTM and gRTM achieve the best performance when c = 12, α = 1, and the sampling ratio for unobserved links is set to 0.1%. For RDL we use the Gaussian feature generator distribution and network structures of B-K, B-100-K, and B-100-100-K. For all models we vary the representation dimensionality K from 5 to 50. |
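The split and grid-search protocol quoted above (80% of nodes for training, three RDL network depths, and a representation dimensionality K swept from 5 to 50) can be sketched as follows. This is a minimal illustration, not the authors' code; the exact grid points for K and the random seed are assumptions, since the paper only states the range "5 to 50".

```python
import random

def split_nodes(nodes, train_frac=0.8, seed=0):
    """Randomly select a fraction of nodes for training, rest for testing,
    mirroring the paper's 80/20 node-level split."""
    rng = random.Random(seed)
    shuffled = list(nodes)
    rng.shuffle(shuffled)
    cut = int(train_frac * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# Network structures reported for RDL (B = input/bag-of-words layer,
# K = representation dimensionality).
structures = ["B-K", "B-100-K", "B-100-100-K"]

# Assumed grid points for K; the paper only says K is varied from 5 to 50.
dims = [5, 10, 20, 30, 40, 50]

# Hyperparameter grid to evaluate on the validation set.
grid = [(s, k) for s in structures for k in dims]

train_nodes, test_nodes = split_nodes(range(1000))
```

In practice, each `(structure, K)` pair would be trained on the training nodes and scored on the held-out validation set before reporting test-set results.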