Percentile Risk-Constrained Budget Pacing for Guaranteed Display Advertising in Online Optimization
Authors: Liang Dai, Kejie Lyu, Chengcheng Zhang, Guangming Zhao, Zhonglin Zu, Liang Wang, Bo Zheng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | RCPacing's effectiveness is validated through offline evaluations and online A/B testing conducted on the Taobao brand advertising platform. The authors implement RCPacing in their online display advertising system and conduct extensive online/offline experimental evaluations. The results demonstrate that RCPacing is highly effective in improving both the performance and smoothness of online delivery for GD campaigns. |
| Researcher Affiliation | Industry | ¹Alibaba Group, Hangzhou, China; ²Alibaba Group, Beijing, China |
| Pseudocode | Yes | Algorithm 1: Dual Mirror Descent Algorithm and Algorithm 2: RCPacing |
| Open Source Code | Yes | Dataset and the code for all methods are available at https://github.com/danifree/RCPacing. |
| Open Datasets | Yes | We construct a large-scale industrial dataset by collecting real-world ad-serving data from our display advertising system, which consists of 600K impressions and 300 GD ads. Dataset and the code for all methods are available in https://github.com/danifree/RCPacing. |
| Dataset Splits | No | The paper mentions using a 'large-scale industrial dataset' for 'offline evaluation' but does not specify any train/validation/test splits, percentages, or sample counts. |
| Hardware Specification | No | The paper describes the online display advertising system and experiments, but it does not provide specific details about the hardware specifications (e.g., GPU/CPU models, memory) used for running these experiments. |
| Software Dependencies | No | The paper mentions the use of 'deep neural networks (DNN)' and implies Python via the GitHub link, but it does not provide specific software dependencies with version numbers (e.g., PyTorch version, TensorFlow version, Python version, library versions). |
| Experiment Setup | Yes | Table 1 summarizes the optimal values of the important hyper-parameters: ϵ = 0.1 (skew factor), η = 0.2 (step size), α̂ = 0.05 (static gradient clipping), P_ub = 90% (safe percentile upper bound), WR_glb = 15% (global win rate). |
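The Dual Mirror Descent procedure referenced as Algorithm 1 can be sketched as a simple pacing loop. This is an illustrative reconstruction, not the authors' code: only the step size η = 0.2 comes from Table 1, while the per-round target spend `rho`, the dual-variable cap `mu_max`, and the impression stream are assumptions made for the example.

```python
def dual_mirror_descent_pacing(impressions, budget, eta=0.2, mu_max=10.0):
    """Hedged sketch of dual-mirror-descent budget pacing.

    impressions: list of (value, cost) pairs arriving online.
    Returns (decisions, total_spend).
    """
    T = len(impressions)
    rho = budget / T          # target spend per round (assumption)
    mu = 0.0                  # dual variable on the budget constraint
    spend, decisions = 0.0, []
    for value, cost in impressions:
        # Primal step: participate only if the dual-adjusted value is
        # positive and the remaining budget covers the cost.
        take = value - mu * cost > 0 and spend + cost <= budget
        decisions.append(take)
        realized_cost = cost if take else 0.0
        spend += realized_cost
        # Dual step: subgradient update on the budget constraint,
        # projected back onto [0, mu_max].
        mu = min(max(mu - eta * (rho - realized_cost), 0.0), mu_max)
    return decisions, spend
```

Overspending raises μ and suppresses participation in later rounds; underspending lowers μ and loosens it, which is the mechanism that smooths delivery over the campaign horizon.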