Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Long-Term Individual Causal Effect Estimation via Identifiable Latent Representation Learning

Authors: Ruichu Cai, Junjie Wan, Weilin Chen, Zeqin Yang, Zijian Li, Peng Zhen, Jiecheng Guo

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental studies, conducted on multiple synthetic and semi-synthetic datasets, demonstrate the effectiveness of our proposed method."
Researcher Affiliation | Collaboration | (1) School of Computer Science, Guangdong University of Technology, Guangzhou, China; (2) Peng Cheng Laboratory, Shenzhen, China; (3) Mohamed bin Zayed University of Artificial Intelligence, Masdar City, Abu Dhabi; (4) DiDi China Ride-Hailing Business Group, Beijing, China
Pseudocode | No | The paper describes the methodology in prose and figures (e.g., Figure 3) but does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Code is available at https://github.com/DMIRLAB/ICEVAE and https://github.com/learnwjj/ICEVAE.
Open Datasets | Yes | "For the semi-synthetic data, we use IHDP [Hill, 2011] and TWINS [Almond et al., 2005] to validate our model's performance on complex real-world data."
Dataset Splits | No | The paper mentions dividing samples into experimental and observational data and generating treatments and outcomes, but does not provide the specific percentages, counts, or predefined train/validation/test splits needed for reproduction.
Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list version numbers for the key software libraries used in the implementation.
Experiment Setup | No | The paper describes the model architecture and training objective but defers specifics to supplementary material: "The implementation details regarding baselines and our method can be found in Appendix F." Hyperparameters are therefore not given in the main text.