Backdoor Adjustment via Group Adaptation for Debiased Coupon Recommendations

Authors: Junpeng Fang, Gongduo Zhang, Qing Cui, Caizhi Tang, Lihong Gu, Longfei Li, Jinjie Gu, Jun Zhou

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct comprehensive offline and online experiments to demonstrate the efficacy of our proposed paradigm.
Researcher Affiliation | Industry | Junpeng Fang, Gongduo Zhang, Qing Cui, Caizhi Tang, Lihong Gu, Longfei Li, Jinjie Gu, Jun Zhou* Ant Group, Hangzhou, China, {junpeng.fjp,gongduo.zgd,cuiqing.cq,caizhi.tcz,lihong.glh,longyao.llf,jinjie.gujj}@ant.com; jun.zhoujun@antfin.com
Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 3) but no explicit pseudocode blocks or algorithms.
Open Source Code | No | The paper provides no statement or link indicating that code for the described methodology is publicly available.
Open Datasets | Yes | Ds A: Amazon Dataset (Bengio, Ducharme, and Vincent 2000) contains transaction information from Amazon. Ds B: Ele.me Dataset (Tianchi 2022) is a set of recommendation-related data provided by Ele.me. Ds C: MovieLens Dataset (Harper and Konstan 2015) is a rating dataset (continuous values ranging from 0 to 5) collected from the MovieLens website. Ds D: Alibaba Ads Click Dataset (Tianchi 2018) is collected from the online advertising system at Alibaba.
Dataset Splits | Yes | Five-fold cross-validation is conducted, and the mean metric values and standard deviations for each approach are recorded.
Hardware Specification | Yes | We implement it with Tensorflow (Abadi et al. 2015) and use Adam (Kingma and Ba 2014) optimizer with the default setting to train the model on a PC with Intel i7 six cores 2.6GHz CPU and 16GB memory, and the operating environment is Python 3.7.12.
Software Dependencies | Yes | We implement it with Tensorflow (Abadi et al. 2015) and use Adam (Kingma and Ba 2014) optimizer with the default setting to train the model on a PC with Intel i7 six cores 2.6GHz CPU and 16GB memory, and the operating environment is Python 3.7.12.
Experiment Setup | No | The paper mentions network layer sizes and the optimizer ('Adam with the default setting'), but omits specific hyperparameter values such as the learning rate, batch size, and number of epochs, which are needed to reproduce the experimental setup.
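The evaluation protocol quoted under Dataset Splits (five-fold cross-validation, reporting mean and standard deviation per method) can be sketched as follows. This is not the authors' implementation; the model and metric below are hypothetical placeholders, and the fold assignment is a generic shuffled split.

```python
import random
import statistics

def five_fold_cv(data, fit_predict, metric, seed=0):
    """Shuffle indices, split into 5 folds, train on 4 and test on 1,
    and return (mean, std) of the metric across the 5 folds."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::5] for k in range(5)]  # 5 disjoint index groups
    scores = []
    for k in range(5):
        test = [data[i] for i in folds[k]]
        train = [data[i] for j in range(5) if j != k for i in folds[j]]
        scores.append(metric(test, fit_predict(train, test)))
    return statistics.mean(scores), statistics.stdev(scores)

# Toy usage: a trivial "model" that predicts the training-label mean,
# scored with mean squared error (both stand-ins for the paper's setup).
data = [(float(x), 2.0 * x) for x in range(100)]  # (feature, label) pairs
mean_label = lambda train, test: [sum(y for _, y in train) / len(train)] * len(test)
mse = lambda test, preds: sum((y - p) ** 2 for (_, y), p in zip(test, preds)) / len(test)
cv_mean, cv_std = five_fold_cv(data, mean_label, mse)
```

Reporting `cv_mean` with `cv_std` matches the paper's stated practice of recording mean metric values and standard deviations for each approach.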
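The quoted training setup uses "Adam with the default setting". For reference, the default hyperparameters from Kingma and Ba (2014) are a learning rate of 0.001, beta1 = 0.9, beta2 = 0.999, and epsilon = 1e-8; a minimal scalar sketch of the update rule (not the paper's TensorFlow training loop) is:

```python
import math

def adam_minimize(grad, x0, steps, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Scalar Adam with the defaults from Kingma and Ba (2014)."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered variance)
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x - 3)^2 (gradient 2(x - 3)) from x = 0.
x_final = adam_minimize(lambda x: 2.0 * (x - 3.0), 0.0, steps=20000)
```

Because the effective step size stays near `lr` regardless of gradient magnitude, Adam with these defaults moves roughly 0.001 per step early on, which is why the learning rate alone does not pin down training cost without the batch size and epoch count the review flags as missing.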