Little Is Much: Bridging Cross-Platform Behaviors through Overlapped Crowds

Authors: Meng Jiang, Peng Cui, Nicholas Jing Yuan, Xing Xie, Shiqiang Yang

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments across two real social networks show that XPTRANS significantly outperforms the state-of-the-art."
Researcher Affiliation | Collaboration | Meng Jiang, Peng Cui (Tsinghua University); Nicholas Jing Yuan, Xing Xie (Microsoft Research Asia); Shiqiang Yang (Tsinghua University)
Pseudocode | Yes | Algorithm 1: "XPTRANS: Semi-supervised transfer learning for cross-platform behavior prediction" (see the illustrative sketch after the table)
Open Source Code | No | No explicit statement or link providing concrete access to the source code for the methodology described in this paper was found.
Open Datasets | No | "We use the Sina Weibo (tag, tweet entity) and Douban (book, movie, music) data sets in our experiments. We identified the overlapped users with their log-in accounts. Table 1 lists the data statistics."
Dataset Splits | No | "We set the percentage of training behavioral entries in R^(P) by non-overlapping users as 70%, the percentage of auxiliary behavioral entries in R^(Q) by non-overlapping users as 70%, and the other two parameters: α_R^(PQ) ∈ [0, 100%], the percentage of overlapping behavioral entries in R^(P) and R^(Q); α_U^(PQ) ∈ [0, 100%], the percentage of the most active overlapping users in R^(P) and R^(Q)." (see the split-construction sketch after the table)
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running the experiments were mentioned.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., library or solver names with version numbers) were mentioned.
Experiment Setup | No | The paper defines parameters such as λ and μ in the objective function, but it does not specify concrete numerical values for these hyperparameters or other training configurations such as learning rates, batch sizes, or epochs.
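The Dataset Splits row quotes the split parameters but not how the split is actually assembled. The sketch below shows one plausible construction under stated assumptions: behavior logs are dicts mapping each user to a list of behavioral entries, the 70% / α_R^(PQ) / α_U^(PQ) parameters follow the quoted description, and the function name, data layout, and random seeding are hypothetical rather than taken from the paper's released code (none is available).

```python
import random

def build_split(platform_P, platform_Q, overlap_users,
                train_frac=0.70, aux_frac=0.70,
                alpha_R=0.5, alpha_U=0.5, seed=0):
    """Hypothetical cross-platform split construction (illustrative only).

    platform_P / platform_Q: dicts mapping user -> list of behavioral
        entries (user-entity interactions) on each platform.
    overlap_users: users appearing on both platforms.
    train_frac / aux_frac: fraction of non-overlapping users' entries
        kept as training data in R(P) and auxiliary data in R(Q).
    alpha_R: fraction of overlapping users' entries kept in R(P), R(Q).
    alpha_U: fraction of the most active overlapping users kept.
    """
    rng = random.Random(seed)

    def sample(entries, frac):
        k = int(round(frac * len(entries)))
        return rng.sample(entries, k)

    # Keep only the most active alpha_U fraction of overlapping users,
    # ranked by total activity on both platforms (an assumption).
    ranked = sorted(
        overlap_users,
        key=lambda u: len(platform_P.get(u, [])) + len(platform_Q.get(u, [])),
        reverse=True)
    kept_overlap = set(ranked[:int(round(alpha_U * len(ranked)))])

    R_P, R_Q = [], []
    for user, entries in platform_P.items():
        if user in kept_overlap:
            R_P.extend((user, e) for e in sample(entries, alpha_R))
        elif user not in overlap_users:
            R_P.extend((user, e) for e in sample(entries, train_frac))
    for user, entries in platform_Q.items():
        if user in kept_overlap:
            R_Q.extend((user, e) for e in sample(entries, alpha_R))
        elif user not in overlap_users:
            R_Q.extend((user, e) for e in sample(entries, aux_frac))
    return R_P, R_Q
```

The paper does not state how the held-out remainder of R(P) is used, so treating it as the evaluation set would also be an assumption.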
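The Pseudocode row gives only the caption of Algorithm 1. As a rough illustration of the general idea named in that caption, transfer learning that couples two platforms through their overlapping users, here is a generic collective matrix-factorization sketch with a coupling term on shared user factors; the weights are named lam and mu after the λ and μ mentioned under Experiment Setup. This is not the authors' XPTRANS algorithm, and every modeling choice below is an assumption.

```python
import numpy as np

def factorize_two_platforms(R_P, R_Q, overlap_P, overlap_Q,
                            k=10, lam=0.1, mu=1.0, lr=0.01,
                            epochs=100, seed=0):
    """Generic coupled factorization across two platforms (illustrative only).

    R_P, R_Q: user-entity matrices for the two platforms, with np.nan
        marking unobserved entries.
    overlap_P, overlap_Q: index arrays aligning the same overlapping
        users in the rows of R_P and R_Q.
    lam, mu: regularization and cross-platform coupling weights.
    """
    rng = np.random.default_rng(seed)
    U_P = rng.normal(scale=0.1, size=(R_P.shape[0], k))
    V_P = rng.normal(scale=0.1, size=(R_P.shape[1], k))
    U_Q = rng.normal(scale=0.1, size=(R_Q.shape[0], k))
    V_Q = rng.normal(scale=0.1, size=(R_Q.shape[1], k))
    M_P, M_Q = ~np.isnan(R_P), ~np.isnan(R_Q)   # observation masks
    R_P0, R_Q0 = np.nan_to_num(R_P), np.nan_to_num(R_Q)

    for _ in range(epochs):
        E_P = M_P * (U_P @ V_P.T - R_P0)        # masked reconstruction errors
        E_Q = M_Q * (U_Q @ V_Q.T - R_Q0)
        g_UP = E_P @ V_P + lam * U_P
        g_VP = E_P.T @ U_P + lam * V_P
        g_UQ = E_Q @ V_Q + lam * U_Q
        g_VQ = E_Q.T @ U_Q + lam * V_Q
        # Couple the platforms: pull overlapping users' factors together.
        diff = U_P[overlap_P] - U_Q[overlap_Q]
        g_UP[overlap_P] += mu * diff
        g_UQ[overlap_Q] -= mu * diff
        U_P -= lr * g_UP; V_P -= lr * g_VP
        U_Q -= lr * g_UQ; V_Q -= lr * g_VQ

    return U_P @ V_P.T, U_Q @ V_Q.T             # predicted behavior scores
```

The sketch only illustrates the coupling through overlapped users that the title and algorithm caption make explicit; how the paper additionally exploits unlabeled entries (the "semi-supervised" part) is not recoverable from this table.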