Estimating Propensity for Causality-based Recommendation without Exposure Data

Authors: Zhongzhou Liu, Yuan Fang, Min Wu

NeurIPS 2023

Each entry below gives a reproducibility variable, its result, and the supporting LLM response:
Research Type: Experimental
  "Finally, we empirically evaluate PROPCARE through both quantitative and qualitative experiments."
Researcher Affiliation: Academia
  "Zhongzhou Liu, School of Computing and Information Systems, Singapore Management University, Singapore 178902, zzliu.2020@phdcs.smu.edu.sg; Yuan Fang, School of Computing and Information Systems, Singapore Management University, Singapore 178902, yfang@smu.edu.sg; Min Wu, Institute for Infocomm Research, A*STAR, Singapore 138632, wumin@i2r.a-star.edu.sg"
Pseudocode: Yes
  "Algorithm 1: Training PROPCARE. Input: Observed training interaction data D. Output: Model parameters Θ."
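The entry above quotes only the interface of Algorithm 1 (input: observed interactions D; output: parameters Θ). As a purely hypothetical sketch of what a mini-batch training loop with that interface looks like, the skeleton below uses a placeholder scalar parameter and a placeholder quadratic loss; the actual PROPCARE objective and parameterization are not reproduced in this summary.

```python
import random

def train(D, theta, epochs=5, batch_size=4, lr=0.1):
    """Generic mini-batch SGD skeleton: shuffle D, sweep batches, update theta.

    The loss here is a placeholder mean squared error between the scalar
    parameter theta and each interaction's label y — NOT the PROPCARE loss.
    """
    for _ in range(epochs):
        random.shuffle(D)
        for start in range(0, len(D), batch_size):
            batch = D[start:start + batch_size]
            # gradient of mean (theta - y)^2 over the batch
            grad = sum(2.0 * (theta - y) for _, _, y in batch) / len(batch)
            theta -= lr * grad
    return theta

# toy "observed interactions": (user, item, label) triples
D = [(u, i, 1.0) for u in range(4) for i in range(4)]
theta = train(D, theta=0.0)  # converges toward the label mean, 1.0
```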
Open Source Code: No
  The paper does not provide a direct link or explicit statement about open-sourcing the code for PROPCARE, its main contribution; it only mentions using and implementing baselines, some of which link to their respective codebases.
Open Datasets: Yes
  "We employ three standard causality-based recommendation benchmarks. Among them, DH_original and DH_personalized are two versions of the Dunnhumby dataset [30]... The third dataset is MovieLens 100K (ML) [29]... The raw data are available at https://www.dunnhumby.com/careers/engineering/sourcefiles. ... The raw data are available at https://grouplens.org/datasets/movielens."
Dataset Splits: Yes
  "On each dataset, we generate the training/validation/test sets following their original work [30, 29], respectively. ... For the DH datasets, the data generation process is repeated 10 times to simulate the 10-week training data, once more to simulate the 1-week validation data, and 10 more times to simulate the 10-week testing data."
Hardware Specification: Yes
  "All experiments were conducted on a Linux server with an AMD EPYC 7742 64-core CPU, 512 GB DDR4 memory and four RTX 3090 GPUs."
Software Dependencies: Yes
  "We implement PROPCARE using TensorFlow 2.11 in Python 3.10."
Experiment Setup: Yes
  "Specifically, in PROPCARE, the trade-off parameters λ and µ are set to 10 and 0.4, respectively, on all datasets. ... where the threshold ϵ is set to 0.2 for DH_original and DH_personalized, and 0.15 for ML. ... For DH_original and DH_personalized, the scaling factor c is set to 0.8, while for ML it is set to 0.2. ... The embedding model fe takes (xu||xi) as input and is implemented as an MLP with 256, 128 and 64 neurons for its layers. fp and fr are both implemented as MLPs with 64, 32, 16, 8 neurons for the hidden layers and an output layer activated by the sigmoid function. ... PROPCARE is trained with a stochastic gradient descent optimizer using mini-batches, with a batch size set to 5096."
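The quoted setup fixes the layer widths (fe: 256, 128, 64; fp and fr: hidden 64, 32, 16, 8 with a sigmoid output) and the batch size (5096). A minimal framework-agnostic numpy sketch of those shapes follows; the hidden-layer activation (ReLU), weight initialization, per-side feature width d, and the linear output of fe are all assumptions not stated in the quote.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_mlp(layer_sizes, rng):
    """Randomly initialized weights/biases for an MLP with the given widths."""
    return [
        (rng.standard_normal((m, n)) * 0.01, np.zeros(n))
        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])
    ]

def forward(params, x, out_sigmoid=False):
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)   # assumed ReLU on hidden layers
        elif out_sigmoid:
            h = sigmoid(h)           # quoted sigmoid output for fp / fr
    return h

rng = np.random.default_rng(0)
d = 32  # assumed per-side feature width, so (x_u || x_i) has 2*d dims

f_e = make_mlp([2 * d, 256, 128, 64], rng)   # embedding model, quoted widths
f_p = make_mlp([64, 64, 32, 16, 8, 1], rng)  # propensity head, quoted widths
f_r = make_mlp([64, 64, 32, 16, 8, 1], rng)  # relevance head, quoted widths

x = rng.standard_normal((5096, 2 * d))       # one mini-batch of the quoted size
z = forward(f_e, x)                          # 64-dim embeddings
p = forward(f_p, z, out_sigmoid=True)        # propensity scores in (0, 1)
```

In the paper's actual implementation these would be TensorFlow 2.11 models trained with SGD; the sketch only demonstrates that the quoted widths compose into a consistent forward pass.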