APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction

Authors: Bencheng Yan, Pengjie Wang, Kai Zhang, Feng Li, Hongbo Deng, Jian Xu, Bo Zheng

NeurIPS 2022

Reproducibility variables, each listed with its result and the supporting LLM response:
Research Type: Experimental
LLM Response: "Extensive experimental evaluation results show that APG can be applied to a variety of deep CTR models and significantly improve their performance. Meanwhile, APG can reduce the time cost by 38.7% and memory usage by 96.6% compared to a regular deep CTR model. We have deployed APG in the industrial sponsored search system and achieved 3% CTR gain and 1% RPM gain respectively. ... 4 Experiments ... Datasets. Four real-world datasets are used including Amazon, Movie Lens, IAAC, and Indus Data. ... Baselines. Here, we compare our method with two kinds of methods ... 4.2 Performance Evaluation with Existing Deep CTR Models ... Table 2: The AUC (%) results of Click-Through Rate (CTR) prediction on different datasets."
Researcher Affiliation: Industry
LLM Response: "Bencheng Yan, Pengjie Wang, Kai Zhang, Feng Li, Hongbo Deng, Jian Xu, Bo Zheng, Alibaba Group, China. {bencheng.ybc,pengjie.wpj,victorlanger.zk,adam.lf,dhb167148,xiyu.xj,bozheng}@alibaba-inc.com"
Pseudocode: No
LLM Response: The paper describes its methods using prose and mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. (An illustrative sketch of the general idea is given after this table.)
Open Source Code: No
LLM Response: "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] All data, instructions and hyper-parameters are explicitly written in the main paper and/or in the Suppl (see Appendix B). The code is currently proprietary."
Open Datasets: Yes
LLM Response: "Datasets. Four real-world datasets are used including Amazon, Movie Lens, IAAC, and Indus Data. The first three are public datasets and the last is an industrial dataset. ... We provide the URLs of the public datasets"
Dataset Splits: Yes
LLM Response: "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section 4.1 and Appendix B"
Hardware Specification: Yes
LLM Response: "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix B"
Software Dependencies: No
LLM Response: The paper describes using neural networks and deep CTR models, implying dependencies on frameworks like TensorFlow or PyTorch, but it does not specify any software names with version numbers.
Experiment Setup: Yes
LLM Response: "4.1 Experimental Settings The detailed settings including datasets, baselines, and training details are presented in Appendix B. ... Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Section 4.1 and Appendix B"
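
Since the paper itself contains no pseudocode, the following is a minimal illustrative sketch (PyTorch) of the general hypernetwork-style idea behind adaptive parameter generation: the weights of a layer are produced per instance by a small generator network conditioned on an input vector, instead of being shared globally. The class name AdaptiveLinear, the generator architecture, and all dimensions are assumptions chosen only for illustration; this is not the authors' implementation, and it omits the efficiency designs behind the reported 38.7% time and 96.6% memory reductions.

# Illustrative sketch only: input-conditioned ("adaptive") weight generation
# for one MLP layer. Names, shapes, and the generator design are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class AdaptiveLinear(nn.Module):
    """A linear layer whose weight matrix is generated per instance from a condition vector."""

    def __init__(self, cond_dim: int, in_dim: int, out_dim: int, hidden: int = 32):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Small generator network: condition vector -> flattened weight matrix.
        self.generator = nn.Sequential(
            nn.Linear(cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, in_dim * out_dim),
        )
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim), cond: (batch, cond_dim)
        w = self.generator(cond).view(-1, self.out_dim, self.in_dim)  # (batch, out, in)
        # Apply the instance-specific weight to each input in the batch.
        return torch.bmm(w, x.unsqueeze(-1)).squeeze(-1) + self.bias  # (batch, out)


if __name__ == "__main__":
    layer = AdaptiveLinear(cond_dim=8, in_dim=16, out_dim=4)
    x = torch.randn(32, 16)      # e.g. concatenated feature embeddings
    cond = torch.randn(32, 8)    # e.g. an instance-specific condition embedding
    print(layer(x, cond).shape)  # torch.Size([32, 4])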