Causal Inference via Sparse Additive Models with Application to Online Advertising

Authors: Wei Sun, Pengyuan Wang, Dawei Yin, Jian Yang, Yi Chang

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the efficacy of our approach, we apply it to a real online advertising campaign to evaluate the impact of three ad treatments: ad frequency, ad channel, and ad size. We show that ad frequency usually has a treatment effect cap when ads are shown on mobile devices. In addition, the strategies for choosing the best ad size are completely different for mobile ads and online ads.
Researcher Affiliation | Collaboration | 1. Purdue University, West Lafayette, IN, USA, sun244@purdue.edu; 2. Yahoo Labs, Sunnyvale, CA, USA, {pengyuan, daweiy, jianyang, yichang}@yahoo-inc.com
Pseudocode | Yes | Table 1: Our Two-stage Algorithm. Input: Y_i, X_i, T_i for i = 1, 2, ..., N. Output: estimated treatment effect for t. Stage 1: obtain the estimated propensity parameter θ̂(X_i) by modeling T_i | X_i via SAM. Stage 2: calculate the final treatment effect by modeling Y_i | T_i, θ̂(X_i) via GAM as in (10). (See the illustrative code sketch after this table.)
Open Source Code | No | The paper does not provide any specific links or statements about the availability of source code for the methodology.
Open Datasets | No | The paper notes that the reported dataset and results are deliberately incomplete and subject to anonymization, and thus do not necessarily reflect the real portfolio at any particular time; no public dataset is released.
Dataset Splits | No | The paper does not provide specific details on validation splits or a cross-validation setup for reproducibility. It mentions testing data for the simulations, but no distinct validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions various algorithms (gbm, lasso, slogit, bagging, rf, SAM, GAM) but does not provide specific version numbers for any software dependencies.
Experiment Setup | No | The paper gives simulation parameters (sample size N = 1000, number of features p = 200) and describes how the tuning parameter λ can be tuned, but does not provide concrete hyperparameter values or detailed training configurations (e.g., learning rate, batch size) for the models used in the experiments. (An illustrative λ-tuning sketch appears after the code example below.)
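
The two-stage algorithm from the Pseudocode row can be made concrete in a few lines. The sketch below is a rough approximation under stated assumptions, not the authors' implementation: it stands in for the Sparse Additive Model (SAM) with an L1-penalized logistic regression over per-feature spline bases, and for the stage-2 GAM with a ridge regression over a spline basis of the estimated propensity. All function names, penalties, and defaults here are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression, Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer

    def two_stage_effect(X, T, Y, n_knots=5, l1_strength=1.0):
        # X: (N, p) covariates; T: (N,) binary treatment; Y: (N,) outcome.
        # Stage 1: model T | X additively. The L1 penalty on per-feature
        # spline coefficients mimics SAM's feature sparsity (the actual SAM
        # uses a group-wise functional sparsity penalty).
        propensity = make_pipeline(
            SplineTransformer(degree=3, n_knots=n_knots),
            LogisticRegression(penalty="l1", solver="saga",
                               C=1.0 / l1_strength, max_iter=5000),
        ).fit(X, T)
        theta_hat = propensity.predict_proba(X)[:, 1]  # estimated θ̂(X_i)

        # Stage 2: model Y | T, θ̂(X) with a smooth term in θ̂, GAM-style,
        # in the spirit of the paper's equation (10).
        basis = SplineTransformer(degree=3, n_knots=n_knots)
        Z = np.column_stack([T, basis.fit_transform(theta_hat[:, None])])
        outcome = Ridge(alpha=1.0).fit(Z, Y)

        # Treatment effect: contrast predictions under T = 1 vs. T = 0
        # while holding the propensity term fixed.
        Z1, Z0 = Z.copy(), Z.copy()
        Z1[:, 0], Z0[:, 0] = 1.0, 0.0
        return np.mean(outcome.predict(Z1) - outcome.predict(Z0))

Note that stage 2 adjusts for confounding only through the scalar propensity θ̂(X), matching the algorithm's use of the propensity parameter as a one-dimensional summary of X.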
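
The Experiment Setup row notes that the paper explains how the sparsity penalty λ can be tuned without fixing a value. Purely as an assumed example (the paper's exact selection criterion is not reproduced in this report), one concrete choice is cross-validating the stage-1 propensity model over a grid of penalty strengths, reusing the X and T arrays from the sketch above:

    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer

    # Hypothetical tuning of the stage-1 penalty; the grid values and
    # 5-fold CV are illustrative, not taken from the paper.
    pipeline = make_pipeline(
        SplineTransformer(degree=3, n_knots=5),
        LogisticRegression(penalty="l1", solver="saga", max_iter=5000),
    )
    search = GridSearchCV(
        pipeline,
        param_grid={"logisticregression__C": [0.01, 0.1, 1.0, 10.0]},
        cv=5,
        scoring="neg_log_loss",
    ).fit(X, T)
    best_C = search.best_params_["logisticregression__C"]  # λ ≈ 1 / C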