Representation Learning for Treatment Effect Estimation from Observational Data

Authors: Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, Aidong Zhang

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on synthetic and three real-world datasets demonstrate the advantages of the proposed SITE method, compared with the state-of-the-art ITE estimation methods.
Researcher Affiliation | Collaboration | Liuyi Yao (SUNY at Buffalo, liuyiyao@buffalo.edu); Sheng Li (University of Georgia, sheng.li@uga.edu); Yaliang Li (Tencent Medical AI Lab, yaliangli@tencent.com); Mengdi Huai (SUNY at Buffalo, mengdihu@buffalo.edu); Jing Gao (SUNY at Buffalo, jing@buffalo.edu); Aidong Zhang (SUNY at Buffalo, azhang@buffalo.edu)
Pseudocode | No | The paper describes the proposed method using prose, mathematical equations, and figures, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code of SITE is available at https://github.com/Osier-Yi/SITE.
Open Datasets | Yes | "IHDP and Jobs dataset are adopted in [30]... The Twins dataset comes from all twin births in the USA between 1989 and 1991 [2]."
Dataset Splits | No | The paper mentions a 'training dataset' and a 'test dataset' but provides no split percentages, sample counts, or split methodology (e.g., random seed, stratification), and it does not mention a separate validation set.
Hardware Specification | Yes | Also, we gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
Software Dependencies | No | The paper mentions general software components such as the Adam optimizer, Dropout, and the ReLU activation function, but it does not give version numbers for any libraries, frameworks, or programming languages (e.g., TensorFlow 2.x, PyTorch 1.x, Python 3.x).
Experiment Setup | No | The paper describes the model architecture (feed-forward neural networks with dh hidden layers and ReLU activations) and the optimizer (Adam), but it does not report concrete hyperparameter values such as the learning rate, batch size, number of epochs, or the exact number of hidden layers (dh is left as a variable, not a concrete number).
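To make the Experiment Setup gap concrete, the sketch below trains the kind of network the paper describes: feed-forward layers with ReLU activations, optimized with Adam. Everything the paper leaves unspecified is an assumption here: the learning rate (1e-3), the depth and widths (two hidden layers of 32 units), the step count (200), full-batch updates, and the toy data itself. This is an illustration of the under-specified setup, not the authors' configuration.

```python
# Hedged sketch: feed-forward ReLU network + Adam, with all concrete
# hyperparameters (lr, widths, depth, steps) assumed, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for covariates -> outcome.
X = rng.normal(size=(256, 8))
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(256, 1))

# Two hidden ReLU layers; the paper only says "dh hidden layers".
sizes = [8, 32, 32, 1]
params = [[rng.normal(scale=np.sqrt(2.0 / din), size=(din, dout)),
           np.zeros((1, dout))]
          for din, dout in zip(sizes[:-1], sizes[1:])]

def forward(X, params):
    """Feed-forward pass; ReLU on hidden layers, linear output. Caches activations."""
    acts = [X]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(np.maximum(z, 0.0) if i < len(params) - 1 else z)
    return acts

def grads(acts, y, params):
    """Backpropagation for mean-squared-error loss."""
    n = y.shape[0]
    delta = 2.0 * (acts[-1] - y) / n
    gs = []
    for i in range(len(params) - 1, -1, -1):
        W, _ = params[i]
        gs.append([acts[i].T @ delta, delta.sum(axis=0, keepdims=True)])
        if i > 0:
            delta = (delta @ W.T) * (acts[i] > 0)  # ReLU derivative
    return gs[::-1]

# Standard Adam update (bias-corrected first and second moments).
lr, b1, b2, eps = 1e-3, 0.9, 0.999, 1e-8
m = [[np.zeros_like(p) for p in pair] for pair in params]
v = [[np.zeros_like(p) for p in pair] for pair in params]

losses = []
for t in range(1, 201):  # 200 full-batch steps (assumed; paper gives none)
    acts = forward(X, params)
    losses.append(float(np.mean((acts[-1] - y) ** 2)))
    for i, pair_g in enumerate(grads(acts, y, params)):
        for j, g in enumerate(pair_g):
            m[i][j] = b1 * m[i][j] + (1 - b1) * g
            v[i][j] = b2 * v[i][j] + (1 - b2) * g * g
            mhat = m[i][j] / (1 - b1 ** t)
            vhat = v[i][j] / (1 - b2 ** t)
            params[i][j] -= lr * mhat / (np.sqrt(vhat) + eps)
```

A reproduction of the paper would need each of these assumed values pinned down before results could be compared.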