Neural Link Prediction over Aligned Networks
Authors: Xuezhi Cao, Haokun Chen, Xuejian Wang, Weinan Zhang, Yong Yu
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that MNN outperforms the state-of-the-art methods and achieves 3% to 5% relative improvement of AUC score across different settings, particularly over 8% for cold start scenarios. |
| Researcher Affiliation | Academia | Xuezhi Cao, Haokun Chen, Xuejian Wang, Weinan Zhang, Yong Yu; APEX Data & Knowledge Management Lab, Shanghai Jiao Tong University ({cxz, chenhaokun, xjwang, wnzhang, yyu}@apex.sjtu.edu.cn) |
| Pseudocode | No | The paper describes the network design with mathematical equations and text, but it does not include a clearly labeled pseudocode block or algorithm. |
| Open Source Code | Yes | The source code as well as the datasets are available online: http://apex.sjtu.edu.cn/projects/34 |
| Open Datasets | Yes | We conduct experiments using two sets of aligned social networks, provided by (Cao and Yu 2016a). |
| Dataset Splits | No | The paper mentions '80% links are used for training' but does not specify a validation split or percentage. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as CPU or GPU models, or memory specifications. |
| Software Dependencies | No | The paper does not mention specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow 2.x). |
| Experiment Setup | Yes | For our multi-neural-network model (MNN), we set the embedding dimension k = 80, sampling rate α = 100, weighting parameter β = 0.5 and the regularization term γ = 0.1. We design each neural network to have 2 hidden layers between the product layer and output layer, with width of 100 and 50 respectively. |
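The reported setup can be sketched as a single component network of the MNN model: a minimal NumPy forward pass using only the hyperparameters quoted above (embedding dimension k = 80, two hidden layers of widths 100 and 50 between the product layer and the output layer). The element-wise product layer, ReLU activations, sigmoid output, and weight initialization are illustrative assumptions not specified in the quoted text.

```python
import numpy as np

k = 80  # embedding dimension reported in the paper

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out):
    # Small random weights; the initialization scheme is an assumption.
    return rng.normal(0.0, 0.1, (fan_in, fan_out)), np.zeros(fan_out)

W1, b1 = init_layer(k, 100)   # hidden layer 1, width 100 (as reported)
W2, b2 = init_layer(100, 50)  # hidden layer 2, width 50 (as reported)
W3, b3 = init_layer(50, 1)    # output layer: scalar link score

def forward(u_emb, v_emb):
    """Score a candidate link between two nodes from their embeddings."""
    # Product layer: element-wise product of the two node embeddings
    # (a common choice; the exact product-layer form is an assumption).
    x = u_emb * v_emb
    h1 = np.maximum(0.0, x @ W1 + b1)   # ReLU, assumed activation
    h2 = np.maximum(0.0, h1 @ W2 + b2)
    return 1.0 / (1.0 + np.exp(-(h2 @ W3 + b3)))  # sigmoid probability

u = rng.normal(size=k)
v = rng.normal(size=k)
p = forward(u, v)
print(p.shape)  # a single link probability in [0, 1]
```

The sampling rate α = 100, weighting parameter β = 0.5, and regularization term γ = 0.1 govern training rather than this forward pass and are omitted here.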