DepthLGP: Learning Embeddings of Out-of-Sample Nodes in Dynamic Networks

Authors: Jianxin Ma, Peng Cui, Wenwu Zhu

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world networks. Empirical results demonstrate that our approach can achieve significant performance gain over existing approaches.
Researcher Affiliation | Academia | Jianxin Ma, Peng Cui, Wenwu Zhu; Department of Computer Science and Technology, Tsinghua University, China; majx13fromthu@gmail.com, cuip@tsinghua.edu.cn, wwzhu@tsinghua.edu.cn
Pseudocode | Yes | Algorithm 1: The Prediction Routine; Algorithm 2: The Training Routine
Open Source Code | No | No explicit statement or link to open-source code is provided for the methodology described in this paper.
Open Datasets | Yes | DBLP: We extract a co-author network from dblp.org...; PPI (Breitkreutz et al. 2008); BlogCatalog (Tang and Liu 2009)
Dataset Splits | No | The paper describes a training procedure that samples subgraphs and treats some of their nodes as out-of-sample for empirical risk minimization, but it does not specify explicit train/validation/test splits of the overall datasets (DBLP, PPI, BlogCatalog) for hyperparameter tuning or model selection. (See the sampling sketch after this table.)
Hardware Specification | No | No specific hardware models (e.g., GPU/CPU types) are mentioned for the experimental setup.
Software Dependencies | No | The paper does not provide version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | We use f(x) = x + g(x) for DepthLGP, where g(x) is a neural network with a single hidden layer of 64 units. We use LeakyReLU as the activation function. ... We set the number of seed nodes to four when sampling a subgraph for training. (A minimal sketch of this setup follows the table.)
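
For concreteness, below is a minimal sketch of the residual transformation quoted in the Experiment Setup row: f(x) = x + g(x), where g is a single-hidden-layer network with 64 units and LeakyReLU activations. The paper does not name a framework, so the use of PyTorch and the embedding dimension are assumptions.

    import torch
    import torch.nn as nn

    class ResidualTransform(nn.Module):
        """f(x) = x + g(x): the residual mapping from the Experiment Setup row.

        g is a one-hidden-layer MLP (64 units, LeakyReLU), per the paper.
        The embedding dimension `dim` and the choice of PyTorch are assumptions.
        """
        def __init__(self, dim: int, hidden: int = 64):
            super().__init__()
            self.g = nn.Sequential(
                nn.Linear(dim, hidden),
                nn.LeakyReLU(),
                nn.Linear(hidden, dim),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Residual connection: output the input plus the MLP correction.
            return x + self.g(x)

    # Usage: map latent states to embeddings (dimension 128 is assumed).
    f = ResidualTransform(dim=128)
    embeddings = f(torch.randn(10, 128))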
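The Dataset Splits row notes that training relies on sampled subgraphs in which some nodes are treated as out-of-sample, and the Experiment Setup row states that four seed nodes are used per subgraph. The sketch below shows one plausible such sampler; the hop radius, the held-out fraction, and the use of networkx are illustrative assumptions not stated in the paper.

    import random
    import networkx as nx

    def sample_training_subgraph(G: nx.Graph, num_seeds: int = 4,
                                 hops: int = 1, held_out_frac: float = 0.25):
        """Sample a subgraph around `num_seeds` seed nodes (four in the paper)
        and hold out a fraction of its nodes as simulated out-of-sample nodes.

        The hop radius and held-out fraction are assumptions; the paper
        specifies only the number of seed nodes.
        """
        seeds = random.sample(list(G.nodes), num_seeds)
        nodes = set(seeds)
        frontier = set(seeds)
        for _ in range(hops):
            # Expand the subgraph by one hop around the current frontier.
            frontier = {v for u in frontier for v in G.neighbors(u)} - nodes
            nodes |= frontier
        subgraph = G.subgraph(nodes).copy()
        k = max(1, int(held_out_frac * subgraph.number_of_nodes()))
        held_out = set(random.sample(list(subgraph.nodes), k))
        return subgraph, held_out

During training, the model would then be asked to predict the embeddings of the held-out nodes from the remaining ones, i.e., the empirical risk minimization over simulated out-of-sample nodes described above.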