Joint Link Prediction and Network Alignment via Cross-graph Embedding

Authors: Xingbo Du, Junchi Yan, Hongyuan Zha

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | By extensive experiments on public benchmarks, we show that link prediction and network alignment can benefit each other especially for improving the recall for both tasks.
Researcher Affiliation | Academia | 1) School of Computer Science and Software Engineering, East China Normal University; 2) Department of Computer Science and Engineering & MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University; 3) KLATASDS-MOE, East China Normal University. Emails: dxb630@126.com, yanjunchi@sjtu.edu.cn, zha@sei.ecnu.edu.cn
Pseudocode | Yes | Algorithm 1: Network Alignment (NA) [...] Algorithm 2: Cross-graph Link Prediction (LP) [...] Algorithm 3: Cross-graph Node Embedding for Joint Network Alignment and Link Prediction (CENALP). (A hedged structural sketch of this alternation appears after the table.)
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology; no links or explicit statements about code availability are present.
Open Datasets | Yes | Popular datasets are used, i.e., Twitter/Facebook, Douban (online and offline communities of China's popular social network), and the DBLP benchmark. [...] DBLP. It is collected by [Prado et al., 2013] [...] Facebook/Twitter. A cross-graph constructed from two real-world social networks as collected and published by [Cao and Yong, 2016]. [...] Douban online/offline. A real-world social network extracted from Douban, which is collected and published by [Zhong et al., 2012].
Dataset Splits | No | The paper mentions sampling training links but does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, and test sets. It states 'Randomly sample a group of existent links E_ext, E'_ext and nonexistent links E_mis, E'_mis in G and G' respectively' for training the link prediction classifier, but not for the overall datasets. (A hedged sampling sketch appears after the table.)
Hardware Specification | Yes | Specifically, the task for DBLP and its disturbed copy with 2,151 nodes and 22 iterations can be finished in average 94 seconds per iteration. Douban online/offline with 1,118 nodes and 18 iterations spend in average 116 seconds per iteration and Facebook/Twitter with 1,043 nodes and 12 iterations spend in average 53 seconds per iteration on our desktop with 2.1GHz CPU and 16G memory.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. While it references models and architectures like 'Skip-gram model' and 'product layer', it does not list programming languages, libraries, or frameworks with their respective versions.
Experiment Setup | Yes | The parameters commonly used in the compared methods are set the same for a fair comparison. Specifically, the dimension of node embeddings, including DeepWalk, struc2vec and our proposed method, is universally set as 64. The maximum depth of neighbors to hop is set as K = 2. The parameter in Eq. 3 is set as α = 5. The probability controlling whether to switch networks is set as q = 0.3. (These values are collected in the configuration snippet after the table.)
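
The Pseudocode entry lists three algorithms that alternate between network alignment and link prediction. Below is a minimal structural sketch of that alternation, assuming NetworkX-style graphs and three placeholder callables (embed_fn, link_predict_fn, align_fn) that stand in for the paper's cross-graph node embedding, link-prediction classifier, and alignment steps; it is a sketch under those assumptions, not the authors' implementation.

```python
# Structural sketch only: the three callables are hypothetical stand-ins
# for the components described in Algorithms 1-3 of the paper.
def cenalp_iteration_skeleton(G1, G2, anchors, embed_fn, link_predict_fn,
                              align_fn, n_iters=10):
    for _ in range(n_iters):
        # Algorithm 3 (embedding step): joint cross-graph node embedding,
        # e.g. skip-gram over walks that may switch networks via anchor pairs.
        emb = embed_fn(G1, G2, anchors)
        # Algorithm 2 (LP): predict cross-graph links and add confident edges.
        links1, links2 = link_predict_fn(G1, G2, emb)
        G1.add_edges_from(links1)
        G2.add_edges_from(links2)
        # Algorithm 1 (NA): match node pairs and grow the anchor set,
        # feeding the enlarged graphs and anchors into the next iteration.
        anchors = set(anchors) | set(align_fn(G1, G2, emb))
    return G1, G2, anchors
```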
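The Dataset Splits entry quotes how existent and non-existent links are sampled to train the link-prediction classifier. The snippet below is a small sketch of such positive/negative link sampling, assuming a NetworkX graph; the sample sizes n_pos and n_neg are placeholders, since the paper does not report split sizes.

```python
import random
import networkx as nx

def sample_link_training_set(G: nx.Graph, n_pos: int, n_neg: int, seed=0):
    """Sample existent links (label 1) and non-existent links (label 0)
    from G, mirroring the quoted training-set construction; the same
    procedure would be run on the second graph G'. The sizes n_pos and
    n_neg are assumptions, not values from the paper."""
    rng = random.Random(seed)
    edges = list(G.edges())
    nodes = list(G.nodes())
    positives = rng.sample(edges, min(n_pos, len(edges)))
    negatives = set()
    while len(negatives) < n_neg:
        u, v = rng.sample(nodes, 2)      # two distinct nodes
        if not G.has_edge(u, v):         # keep only non-existent links
            negatives.add((u, v))
    return [(e, 1) for e in positives] + [(e, 0) for e in negatives]
```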
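The Experiment Setup entry fixes a handful of shared hyper-parameters. For reference they can be collected in a single configuration object; the values below are quoted from the paper, while the variable names are my own.

```python
# Hyper-parameters reported in the Experiment Setup entry (key names are assumptions).
CENALP_CONFIG = {
    "embedding_dim": 64,  # node-embedding dimension (DeepWalk, struc2vec, CENALP)
    "max_hop_depth": 2,   # K, maximum depth of neighbors to hop
    "alpha": 5,           # parameter alpha in Eq. 3 of the paper
    "switch_prob": 0.3,   # q, probability controlling whether to switch networks
}
```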