Social Influence Does Matter: User Action Prediction for In-Feed Advertising

Authors: Hongyang Wang, Qingfei Meng, Ju Fan, Yuchen Li, Laizhong Cui, Xiaoman Zhao, Chong Peng, Gong Chen, Xiaoyong Du

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on real datasets from the commercial advertising platform of WeChat and a public dataset. The experimental results demonstrate that social influence learned by our approach can significantly boost performance of social action prediction.
Researcher Affiliation | Collaboration | 1 Renmin University of China; 2 Tencent; 3 Singapore Management University; 4 Shenzhen University
Pseudocode | Yes | Algorithm 1: GRAPHENCODE(G, h) (a WL-style sketch follows the table)
Open Source Code | No | The paper states that its main datasets (Wechat Day and Wechat Week) 'cannot be published for testing the reproducibility of our proposed approach' and does not provide any link or explicit statement about the release of its own source code.
Open Datasets | Yes | Thus, we also conduct experiments on an open dataset Weibo (Zhang et al. 2013), which is from the most popular Chinese microblogging platform.
Dataset Splits | Yes | All the ad exposure instances in the datasets are split into two parts, i.e., 70% for training and 30% for testing. We use cross validation over the training set to tune hyper-parameters. (A split-and-tuning sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running its experiments.
Software Dependencies | No | The paper describes model architectures, hyperparameters, and optimizers (e.g., 'Adam optimizer'), but it does not specify software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions, or other libraries).
Experiment Setup | Yes | Hyper-parameter settings. For LR, we add the L1 and the L2 regularization to prevent model over-fitting. The DeepFM model uses a two-layer neural network with 32 hidden units and an embedding size of 16. Batch normalization with decay 0.99 is also used for deep learning models. For WL-based graph encoding, we use r = 2 to generate neighborhoods and set the default dimension of x_u^(s) as 60. For GAT-based influence dynamics modeling, we use two GAT layers where each layer has 16 neurons. We use elu as activation function in GCN/GAT and DeepFM. All parameters are initialized using a random normalization. The models are trained using Adam optimizer with logloss function, learning rate 0.001 and mini-batch size 1024. (A training-setup sketch follows the table.)
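
On the pseudocode row: this report only captures the header of Algorithm 1, GRAPHENCODE(G, h). For reference, below is a minimal sketch of classic Weisfeiler-Lehman (WL) relabeling, the procedure the paper's WL-based graph encoding builds on; the function name, data layout, and details are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import defaultdict

def graph_encode(adj, labels, r=2):
    """Sketch of r rounds of Weisfeiler-Lehman relabeling (assumed, not the paper's code).

    adj: dict mapping each node to an iterable of its neighbors.
    labels: dict mapping each node to an initial (hashable) label.
    """
    for _ in range(r):
        # Each node's signature combines its own label with the
        # sorted multiset of its neighbors' labels.
        signatures = {
            u: (labels[u], tuple(sorted(labels[v] for v in adj[u])))
            for u in adj
        }
        # Compress signatures into fresh compact integer labels.
        table = defaultdict(lambda: len(table))
        labels = {u: table[sig] for u, sig in signatures.items()}
    return labels

# Tiny example: a path graph 0 - 1 - 2 with uniform initial labels.
print(graph_encode({0: [1], 1: [0, 2], 2: [1]}, {0: 0, 1: 0, 2: 0}))
```

With r = 2, as in the paper's setting, each node's final label summarizes its 2-hop neighborhood structure.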
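On the dataset-splits row: a minimal sketch of the reported protocol (70% training / 30% testing, with cross validation over the training set for hyper-parameter tuning), using scikit-learn and synthetic placeholder data since the WeChat datasets are private; the model and parameter grid are illustrative, not from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data standing in for ad-exposure instances.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 70% for training and 30% for testing, as reported.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0)

# Cross validation over the training set only to tune hyper-parameters
# (here: the regularization strength of an LR baseline).
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0]}, cv=5, scoring="neg_log_loss")
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```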
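On the experiment-setup row: the shared optimization settings reported above (Adam, logloss, learning rate 0.001, mini-batch size 1024, ELU activations, batch normalization with decay 0.99) could be wired up as in the PyTorch sketch below. The network is a hypothetical stand-in, not the paper's DeepFM/GAT architecture; note that TensorFlow's decay 0.99 corresponds to momentum 0.01 in PyTorch's convention.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network with 32 hidden units, ELU activations,
# and batch normalization with decay 0.99 (PyTorch momentum = 1 - decay).
model = nn.Sequential(
    nn.Linear(60, 32), nn.BatchNorm1d(32, momentum=0.01), nn.ELU(),
    nn.Linear(32, 32), nn.BatchNorm1d(32, momentum=0.01), nn.ELU(),
    nn.Linear(32, 1),
)

criterion = nn.BCEWithLogitsLoss()  # logloss on the binary action label
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# One training step on a random placeholder mini-batch
# (size 1024, 60-dim inputs matching the WL encoding dimension).
x = torch.randn(1024, 60)
y = torch.randint(0, 2, (1024, 1)).float()

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```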