Coupon Design in Advertising Systems

Authors: Weiran Shen, Pingzhong Tang, Xun Wang, Yadong Xu, Xiwang Yang. Pages 5717-5725.

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experiments are conducted to demonstrate the effectiveness of our algorithms based on both synthetic data and industrial data."
Researcher Affiliation | Collaboration | Weiran Shen (Renmin University of China); Pingzhong Tang, Xun Wang, Yadong Xu (Tsinghua University); Xiwang Yang (ByteDance)
Pseudocode | Yes | Algorithm 1: Constructing m sub-VCG auctions; Algorithm 2: Algorithm for the no-feature case; Algorithm 3: Algorithm for the general case
Open Source Code | No | The paper does not provide any links or explicit statements about releasing the source code for its methodology.
Open Datasets | No | The paper states: "We use both synthetic data and industrial data to demonstrate the results of our experiments. As for synthetic data, we choose three different types of distribution to sample the value data... As for industrial data, it comes from one of the biggest short-form mobile video community in the world." However, it does not provide concrete access information (link, DOI, citation) for either the synthetic data generated or the industrial data, implying the latter is proprietary.
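The paper samples synthetic value data from three distribution types but (in the quoted passage) does not name them, so reproducing the synthetic setup requires choosing distributions oneself. A minimal sketch of such a sampler, where uniform, normal, and exponential are hypothetical stand-ins rather than the paper's actual choices:

```python
import random

def sample_values(dist: str, n: int, seed: int = 0) -> list[float]:
    """Draw n synthetic bidder values from a named distribution.

    NOTE: the specific distributions here are illustrative assumptions;
    the paper only says it uses "three different types of distribution".
    """
    rng = random.Random(seed)
    if dist == "uniform":
        return [rng.uniform(0.0, 1.0) for _ in range(n)]
    if dist == "normal":
        # Clip at zero so sampled values stay non-negative.
        return [max(0.0, rng.gauss(0.5, 0.15)) for _ in range(n)]
    if dist == "exponential":
        return [rng.expovariate(2.0) for _ in range(n)]
    raise ValueError(f"unknown distribution: {dist}")

values = sample_values("uniform", 1000)
```

Fixing the seed makes the generated "dataset" reproducible even though no data files are released.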
Dataset Splits | No | The paper states: "Then we run different algorithms on the training data and calculate ρa on the testing data. This procedure is repeated for 20 times..." It specifies a training/testing split (training data makes up about 70%) but does not mention a separate validation split for hyperparameter tuning or early stopping.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used for running the experiments. It only mentions that Algorithm 3 is implemented using Gurobi 9.0.
Software Dependencies | Yes | Algorithm 3 is implemented using Gurobi 9.0 (Gurobi Optimization 2021).
Experiment Setup | Yes | We can see that Alg-3 always yields better performance than Alg-2 since it can utilize features. Algorithms may not converge within 20 iterations when λ is small, i.e., λ = 0.1 in Alg-2 and λ = 0.01 in Alg-3. For Alg-2, larger λ achieves better performance, so λ = 0.5 is chosen to guarantee convergence and better performance. For Alg-3, although larger λ can yield higher revenue in some iterations, it is unstable during training; hence λ is set to 0.05 in Alg-3 to maintain robustness and obtain comparable performance. In the remaining experiments, ε = 0.001 and K_out = 20 are used in both algorithms, and c0 = 10 is chosen so that [...] = 1.
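The reported ε = 0.001 tolerance and K_out = 20 iteration cap suggest an outer loop that stops on small objective change or after 20 rounds, which is consistent with the remark that some λ settings "may not converge within 20 iterations". A generic sketch of such a loop, where `step` is a hypothetical one-iteration update (not the paper's actual algorithm):

```python
def run_until_converged(step, x0, eps=1e-3, k_out=20):
    """Outer-loop template matching the reported hyperparameters:
    stop when successive objective values change by less than eps,
    or after at most k_out iterations.

    `step(x)` is assumed to return (new_x, objective); this interface
    is an illustrative assumption, not taken from the paper.
    """
    x, prev = x0, float("inf")
    for _ in range(k_out):
        x, obj = step(x)
        if abs(obj - prev) < eps:
            break  # converged within tolerance
        prev = obj
    return x

# Example: an update that halves x converges well inside 20 iterations.
result = run_until_converged(lambda x: (x / 2, x / 2), 1.0)
```

With a diverging update (objective growing every round), the loop instead exhausts all k_out = 20 iterations, matching the non-convergence behaviour described for small λ.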