Contrastive Balancing Representation Learning for Heterogeneous Dose-Response Curves Estimation

Authors: Minqin Zhu, Anpeng Wu, Haoxuan Li, Ruoxuan Xiong, Bo Li, Xiaoqing Yang, Xuan Qin, Peng Zhen, Jiecheng Guo, Fei Wu, Kun Kuang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments are conducted on synthetic and real-world datasets demonstrating that our proposal significantly outperforms previous methods." From the Experiments section: "Since the true HDRC are rarely available in real application, in line with previous work (Nie et al. 2021; Bica et al. 2020), we simulate 4 synthetic data and 5 semi-synthetic data from two real-world datasets IHDP and News."
Researcher Affiliation | Collaboration | 1 Department of Computer Science and Technology, Zhejiang University; 2 Mohamed bin Zayed University of Artificial Intelligence; 3 Center for Data Science, Peking University; 4 Department of Quantitative Theory and Methods, Emory University; 5 School of Economics and Management, Tsinghua University; 6 Didi Chuxing. {minqinzhu, anpwu, kunkuang}@zju.edu.cn, hxli@stu.pku.edu.cn, ruoxuan.xiong@emory.edu, libo@sem.tsinghua.edu.cn, {xiaoqingyang, xuanqin, zhenpeng, jiechengguo}@didiglobal.com, wufei@cs.zju.edu.cn
Pseudocode | No | The paper includes figures and mathematical formulations but no explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no explicit statement about releasing source code and no link to a code repository for the described methodology.
Open Datasets | Yes | "We simulate 4 synthetic data and 5 semi-synthetic data from two real-world datasets IHDP and News." IHDP is available at https://www.fredjo.com and News at https://paperdatasets.s3.amazonaws.com/news.db (see the download sketch after the table).
Dataset Splits | Yes | "Then we sample 2100/600/300 units for training/validation/test for each data." IHDP is split into training/validation/test sets of 522/150/75 units; News is split 2100/600/300 (see the split sketch after the table).
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., CPU or GPU model, memory, or cloud instance type).
Software Dependencies | No | The paper does not give version numbers for any software dependencies, libraries, or frameworks (e.g., "PyTorch 1.x" or "TensorFlow 2.x").
Experiment Setup | No | The paper discusses tuning some hyperparameters, such as the weight alpha and the dimension K of the representation Φ(X), and mentions setting m = 1 by default. However, it does not provide a comprehensive list of key hyperparameters (e.g., learning rate, batch size, optimizer, number of epochs) or other system-level training settings used for the main experimental results (a placeholder configuration illustrating the gap appears after the table).
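
For the Open Datasets row, here is a minimal fetch sketch. Only the News URL is quoted in full above; the IHDP archives live at https://www.fredjo.com, but no exact file name is quoted in the report, so that download is omitted. The local file name is arbitrary.

```python
import urllib.request

# Fetch the News database from the URL quoted in the report.
# The IHDP files are hosted at https://www.fredjo.com, but no exact
# file name is quoted above, so that download is left out of this sketch.
NEWS_URL = "https://paperdatasets.s3.amazonaws.com/news.db"
urllib.request.urlretrieve(NEWS_URL, "news.db")  # local name is arbitrary
print("downloaded news.db")
```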
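For the Dataset Splits row, the reported sizes fix how many units land in each partition but not how units are assigned. The sketch below assumes uniform random assignment without replacement; split_units is a hypothetical helper, not code from the paper.

```python
import numpy as np

def split_units(sizes, seed=0):
    """Draw disjoint train/validation/test index sets of the given sizes
    from a pool of sum(sizes) units (uniform random assignment assumed)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(sum(sizes))
    n_tr, n_va, n_te = sizes
    return (idx[:n_tr],
            idx[n_tr:n_tr + n_va],
            idx[n_tr + n_va:n_tr + n_va + n_te])

# Reported sizes: IHDP 522/150/75 (747 units), News 2100/600/300 (3000 units).
ihdp_train, ihdp_val, ihdp_test = split_units((522, 150, 75))
news_train, news_val, news_test = split_units((2100, 600, 300))
```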
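For the Experiment Setup row, this placeholder configuration makes the gap concrete. Only alpha, the representation dimension K, and m are discussed in the paper, and m = 1 is the stated default; every other value below is an illustrative assumption, not a reported setting.

```python
# Placeholder training configuration. Only alpha, the representation
# dimension K, and m are discussed in the paper; m = 1 is the stated
# default, while all other values here are illustrative assumptions.
config = {
    "alpha": 0.5,         # balancing weight (tuned in the paper; value assumed)
    "repr_dim": 64,       # dimension K of representation Phi(X) (value assumed)
    "m": 1,               # stated default in the paper
    "optimizer": "adam",  # assumed; not reported
    "lr": 1e-3,           # assumed; not reported
    "batch_size": 128,    # assumed; not reported
    "epochs": 400,        # assumed; not reported
}
```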