Predict Anchor Links across Social Networks via an Embedding Approach

Authors: Tong Man, Huawei Shen, Shenghua Liu, Xiaolong Jin, Xueqi Cheng

IJCAI 2016

Reproducibility Variable Result LLM Response
Research Type Experimental Through extensive experiments on two realistic datasets, we demonstrate that PALE significantly outperforms the state-of-the-art methods.
Researcher Affiliation Academia CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China
Pseudocode No The paper describes the model algorithmically and provides an illustrative diagram (Figure 1), but it does not include a formal pseudocode block or algorithm listing.
Open Source Code No The paper does not include any statement about releasing open-source code or a link to a code repository for the described methodology.
Open Datasets Yes The first dataset was crawled from Facebook and published in [Viswanath et al., 2009] (http://socialnetworks.mpi-sws.org/data-wosn2009.html). The second dataset used in this paper is a co-author network... extracted from the Microsoft Academic Graph (MAG) [Sinha et al., 2015] (http://research.microsoft.com/en-us/projects/mag/).
Dataset Splits Yes PALE (MLP): PALE model with the MLP being employed as the mapping function, where the dimension of the hidden layer is 2d; the learning rate and the regularizing coefficient are chosen based on 5-fold cross-validation.
Hardware Specification No The paper does not provide any specific details about the hardware used for running the experiments (e.g., CPU, GPU models, memory, or cloud instances).
Software Dependencies No The paper does not mention any specific software dependencies or their version numbers that are required to replicate the experiments.
Experiment Setup Yes To combat this problem, we propose a strategy to identify hidden edges with the help of the observed anchor links and the structure of the other network. ... Finally, we adopt stochastic gradient descent to learn the latent representations. ... In this paper, we consider both linear and non-linear mapping functions. ... For the linear mapping function, the mapping is a d × d matrix... In addition, we employ a Multi-Layer Perceptron (MLP) ... where the dimension of the hidden layer is 2d, and the learning rate and the regularizing coefficient are chosen based on 5-fold cross-validation.
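The setup described above (embeddings in each network, a linear or MLP mapping between them, learned by stochastic gradient descent with L2 regularization) can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the synthetic anchor pairs, dimension d, tanh activation, learning rate, and squared loss are all assumptions; only the linear d × d map, the 2d-wide hidden layer, and SGD training come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: d-dimensional embeddings of anchor users in two
# networks; the true relation is a random linear map plus small noise.
d, n_anchors = 8, 200
true_M = rng.normal(size=(d, d))
U = rng.normal(size=(n_anchors, d))                       # source embeddings
V = U @ true_M + 0.01 * rng.normal(size=(n_anchors, d))   # target embeddings

def train_linear(U, V, lr=0.01, reg=1e-4, epochs=200):
    """Linear mapping phi(u) = u @ M, trained by SGD on squared loss."""
    M = np.zeros((d, d))
    for _ in range(epochs):
        for i in rng.permutation(len(U)):
            u, v = U[i], V[i]
            err = u @ M - v                    # prediction error
            M -= lr * (np.outer(u, err) + reg * M)
    return M

def train_mlp(U, V, hidden=2 * d, lr=0.01, reg=1e-4, epochs=200):
    """One-hidden-layer MLP mapping with a 2d-wide hidden layer, matching
    the PALE (MLP) variant's hidden size; tanh activation is an assumption."""
    W1 = 0.1 * rng.normal(size=(d, hidden))
    W2 = 0.1 * rng.normal(size=(hidden, d))
    for _ in range(epochs):
        for i in rng.permutation(len(U)):
            u, v = U[i], V[i]
            h = np.tanh(u @ W1)
            err = h @ W2 - v                   # prediction error
            gW2 = np.outer(h, err) + reg * W2
            gh = W2 @ err * (1 - h ** 2)       # backprop through tanh
            gW1 = np.outer(u, gh) + reg * W1
            W1 -= lr * gW1
            W2 -= lr * gW2
    return W1, W2

M = train_linear(U, V)
linear_mse = np.mean((U @ M - V) ** 2)
W1, W2 = train_mlp(U, V)
mlp_mse = np.mean((np.tanh(U @ W1) @ W2 - V) ** 2)
print(f"linear MSE: {linear_mse:.4f}, MLP MSE: {mlp_mse:.4f}")
```

In the paper the mapped source embedding is compared against candidate target embeddings to predict anchor links; the sketch stops at learning the mapping, since the candidate-ranking step depends on dataset details not quoted here.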