Graph Structure Learning on User Mobility Data for Social Relationship Inference

Authors: Guangming Qin, Lexue Song, Yanwei Yu, Chao Huang, Wenzhe Jia, Yuan Cao, Junyu Dong

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three real-world datasets demonstrate the superiority of SRINet against state-of-the-art techniques in inferring social relationships from user mobility data.
Researcher Affiliation | Academia | College of Computer Science and Technology, Ocean University of China; Department of Data Science, Duke Kunshan University; Department of Computer Science, University of Hong Kong
Pseudocode | No | The paper includes a block diagram (Figure 1) illustrating the framework, but it does not provide any sections or figures explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present structured steps in a code-like format.
Open Source Code | Yes | The source code of our method is available at https://github.com/qinguangming1999/SRINet.
Open Datasets | Yes | We use three publicly available real-world mobility datasets, i.e., Gowalla, Brightkite (Cho, Myers, and Leskovec 2011), and Foursquare (Yang et al. 2019), to evaluate the performance of models.
Dataset Splits | Yes | In our experiments, we use 25% of the friendships in the social network of each dataset as the training set. We then randomly sample another 5% of friendships for validation and use the remaining 70% for testing. (A minimal split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models, memory, or cloud computing specifications.
Software Dependencies | No | The paper mentions tuning parameters such as learning rate, dropout, and weight decay, but it does not specify any software dependencies (e.g., libraries, frameworks, or programming languages) along with their version numbers.
Experiment Setup | Yes | For our SRINet, we set the user embedding dimension d to 512 unless stated otherwise and the number of convolution layers L to 2; we tune the learning rate from 0.0001 to 0.01, set dropout to 0.01 and weight decay to 0.0001, and use an early stopping mechanism with a patience of 10 to avoid overfitting. The coefficient ω is set to 0.003 for all three datasets. To capture more potential meeting events, we set τ to 2 hours. (A configuration sketch follows the table.)
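
The Dataset Splits row describes a 25%/5%/70% train/validation/test partition over friendship pairs. Below is a minimal sketch of such a split, assuming friendships arrive as a NumPy array of (user_i, user_j) pairs; the function name, array format, and seed are illustrative assumptions, not details taken from the SRINet repository.

```python
# Minimal sketch of the 25% train / 5% validation / 70% test friendship split
# quoted above. Edge-list format and seed are assumptions for illustration.
import numpy as np

def split_friendships(edges: np.ndarray, seed: int = 0):
    """Randomly partition (user_i, user_j) friendship pairs 25/5/70."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(edges))
    n_train = int(0.25 * len(edges))     # 25% for training
    n_val = int(0.05 * len(edges))       # 5% for validation
    train = edges[idx[:n_train]]
    val = edges[idx[n_train:n_train + n_val]]
    test = edges[idx[n_train + n_val:]]  # remaining ~70% for testing
    return train, val, test
```

The paper does not say whether sampling is global or per-user; this sketch samples friendship pairs globally.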
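
The Experiment Setup row lists the reported hyperparameters. As a minimal sketch collecting them in one place, assuming a plain Python dict as the config container (only the values come from the paper; the key names are illustrative, and the learning rate is fixed at one point inside the reported tuning range):

```python
# SRINet hyperparameters as reported in the paper. Only the values come from
# the text; the dict layout and key names are assumptions for illustration.
config = {
    "embedding_dim": 512,   # user embedding dimension d
    "num_conv_layers": 2,   # number of convolution layers L
    "learning_rate": 1e-3,  # tuned within [0.0001, 0.01] per the paper
    "dropout": 0.01,
    "weight_decay": 1e-4,
    "patience": 10,         # early stopping patience, to avoid overfitting
    "omega": 0.003,         # loss coefficient ω, same for all three datasets
    "tau_hours": 2,         # time threshold τ for potential meeting events
}
```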