Who to Invite Next? Predicting Invitees of Social Groups

Authors: Yu Han, Jie Tang

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Employing WeChat, the largest social messaging service in China, as the source of our experimental data, we develop a probabilistic graph model to capture the fundamental factors that determine the probability of a user being invited to a specific social group. Our results show that the proposed model indeed leads to statistically significant prediction improvements over several state-of-the-art baseline methods.
Researcher Affiliation | Collaboration | Yu Han and Jie Tang, Department of Computer Science and Technology, Tsinghua University; yuhanthu@126.com, jietang@tsinghua.edu.cn [...] The other authors include Hao Ye and Bo Chen from Tencent Inc.
Pseudocode | No | The paper describes the model and learning process but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not mention releasing any open-source code or provide a link to a code repository for its methodology.
Open Datasets | No | All the research work in this paper is based on the daily usage logs from WeChat, which is one of the largest standalone messaging services, having over a billion created accounts and 938 million active users as of 2017. We collect all the valid chat groups with names created during half an hour. We only use non-private data, such as network structural information, for research.
Dataset Splits | No | The paper mentions predicting at a specific time stamp based on previous time intervals but does not provide explicit training, validation, or test dataset splits (e.g., percentages or sample counts).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, cloud instances) used for running the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers required to reproduce the experiments.
Experiment Setup | No | The paper describes the model learning process (e.g., gradient descent, LBP) but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, epochs) or other training configurations.
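To make the missing setup details concrete, the kind of gradient-based learning the paper alludes to can be sketched generically. Everything below is a hypothetical stand-in, not the authors' actual model: the logistic form, the single "fraction of friends already in the group" feature, and the learning rate and epoch count are illustrative assumptions, since the paper specifies none of these.

```python
import math

def sigmoid(z):
    """Logistic function mapping a real score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(data, lr=0.1, epochs=500):
    """Batch gradient descent for P(invited = 1 | x) = sigmoid(w.x + b).

    `data` is a list of (feature_vector, label) pairs. The features are
    hypothetical stand-ins for invitation factors (e.g., how many of a
    user's friends are already inside the group).
    """
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = [0.0] * dim
        grad_b = 0.0
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log loss w.r.t. the score
            for i, xi in enumerate(x):
                grad_w[i] += err * xi
            grad_b += err
        w = [wi - lr * gi / n for wi, gi in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

# Tiny synthetic example: one feature = fraction of a user's friends
# already in the group; label = whether the user was invited.
data = [([0.9], 1), ([0.8], 1), ([0.7], 1),
        ([0.2], 0), ([0.1], 0), ([0.0], 0)]
w, b = train_logistic(data)
p_high = sigmoid(w[0] * 0.85 + b)  # user well connected to the group
p_low = sigmoid(w[0] * 0.05 + b)   # user barely connected to the group
```

After training on the synthetic data, the well-connected user receives a higher invitation probability than the barely connected one, which is the qualitative behavior a reproduction would need to verify against the (unreported) hyperparameters.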