Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Rethinking Causal Ranking: A Balanced Perspective on Uplift Model Evaluation

Authors: Minqin Zhu, Zexu Sun, Ruoxuan Xiong, Anpeng Wu, Baohong Li, Caizhi Tang, Jun Zhou, Fei Wu, Kun Kuang

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on both simulated and real-world datasets demonstrate that the PUC provides less biased evaluations, while PTONet outperforms existing methods. We conduct extensive experiments on simulated data, real-world Criteo data, and real-world Lazada data to demonstrate the superior performance of the Principled Uplift Curve and PTONet.
Researcher Affiliation | Collaboration | 1. College of Computer Science and Technology, Zhejiang University, China; 2. Gaoling School of Artificial Intelligence, Renmin University of China; 3. Department of Quantitative Theory and Methods, Emory University, USA; 4. Ant Group, Zhejiang, China.
Pseudocode | No | The paper describes the architecture of PTONet in Figure 4 and details its components, but does not provide structured pseudocode or algorithm blocks for any of the methods.
Open Source Code | Yes | The source code is available at: https://github.com/euzmin/PUC.
Open Datasets | Yes | The real-world Criteo dataset (Diemert Eustache et al., 2018; Diemert et al., 2021), open-sourced by Criteo AI Labs, is used for uplift modeling in a large-scale advertising scenario. The real-world Lazada data is a large-scale production dataset from a real voucher-distribution business scenario at Lazada, a leading Southeast Asian (SEA) e-commerce platform of Alibaba Group. Our data processing follows Zhong et al. (2022).
Dataset Splits | Yes | We split the two datasets into training, validation, and test sets in an 8/1/1 ratio.
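The 8/1/1 split described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, sample count, and seed are placeholders:

```python
import numpy as np

def split_811(n_samples, seed=0):
    """Shuffle indices and split them 80/10/10 into train/val/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_811(1000)
```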
Hardware Specification | Yes | All experiments are conducted on an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz.
Software Dependencies | No | The paper mentions using the "scikit-uplift" package and the "kendalltau" function from the "scipy.stats" module, but does not provide specific version numbers for these software dependencies.
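The `kendalltau` call mentioned above follows the standard `scipy.stats` API (the paper does not pin a version); a minimal usage sketch with placeholder rankings:

```python
from scipy.stats import kendalltau

# Rank correlation between two orderings, e.g. predicted vs. true uplift ranks.
# One of the ten pairs is discordant, so tau = (9 - 1) / 10 = 0.8.
tau, p_value = kendalltau([1, 2, 3, 4, 5], [1, 2, 3, 5, 4])
```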
Experiment Setup | Yes | Hyperparameter Tuning. The ranges of the hyperparameters shared by all methods are as follows: the representation dimension hdim ∈ {2^4, 2^5, 2^6}, the batch size bs ∈ {2^8, 2^9, 2^10, 2^11}, and the learning rate lr ∈ {1e-4, 1e-3, 1e-2, 1e-1}. Furthermore, the hyperparameters α in CFRNet, DragonNet, DESCN, and PTONet, β in DragonNet and PTONet, and β0, β1, γ0, γ1 in DESCN are all confined to the range {0.1, 0.5, 1, 5, 10}. We utilize an Adam optimizer with a maximum of 20 epochs and employ joint Qini as the primary evaluation metric. We implement an early stopping mechanism with a patience of 5 for all baselines, as suggested by Liu et al. (2023).
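The shared hyperparameter grid above can be enumerated with a standard grid search. The sketch below only builds the configuration list (training and evaluation loops omitted); the α range is the shared trade-off range, and variable names are illustrative:

```python
from itertools import product

# Shared ranges from the setup: hdim in {2^4, 2^5, 2^6},
# batch size in {2^8, ..., 2^11}, learning rate in {1e-4, ..., 1e-1},
# and trade-off weight alpha in {0.1, 0.5, 1, 5, 10}.
hdims = [2 ** k for k in (4, 5, 6)]
batch_sizes = [2 ** k for k in (8, 9, 10, 11)]
lrs = [1e-4, 1e-3, 1e-2, 1e-1]
alphas = [0.1, 0.5, 1, 5, 10]

# Cartesian product of all ranges: 3 * 4 * 4 * 5 = 240 configurations.
grid = list(product(hdims, batch_sizes, lrs, alphas))
```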