Treatment Effect Estimation with Disentangled Latent Factors

Authors: Weijia Zhang, Lin Liu, Jiuyong Li (pp. 10923-10930)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate the effectiveness of the proposed method on a wide range of synthetic, benchmark, and real-world datasets.
Researcher Affiliation | Academia | Weijia Zhang*, Lin Liu, Jiuyong Li, University of South Australia, weijia.zhang.xh@gmail.com, {lin.liu,jiuyong.li}@unisa.edu.au
Pseudocode | No | No, the paper describes the model and inference steps in text and equations but does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | The code is available at https://github.com/WeijiaZhang24/TEDVAE.
Open Datasets | Yes | The 2016 Atlantic Causal Inference Challenge (ACIC2016) (Dorie et al. 2019)... This dataset can be accessed at https://github.com/vdorie/aciccomp/tree/master/2016. ... The Infant Health and Development Program (IHDP)... The datasets can be accessed at https://github.com/vdorie/npci.
Dataset Splits | Yes | with 60%/30%/10% train/validation/test splitting proportions.
Hardware Specification | No | No, the paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. It only mentions general aspects of the neural network architecture, such as the number of layers and neurons.
Software Dependencies | No | No, the paper does not specify any software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8').
Experiment Setup | Yes | As a result, we set the latent dimensionality parameters as Dzy = 5, Dzt = 15, Dzc = 15 and set the weight for auxiliary losses as αt = αy = 100. For all the parametrized neural networks, we use 5 hidden layers and 100 hidden neurons in each layer, with ELU activation. ... with 60%/30%/10% train/validation/test splitting proportions.
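The reported experiment setup (latent dimensionalities, auxiliary-loss weights, network widths, and the 60%/30%/10% split) can be sketched as a small configuration. This is a minimal, hypothetical reconstruction for illustration only, not the authors' code; the variable names and the sample count are assumptions.

```python
import numpy as np

# Hypothetical sketch of the reported TEDVAE hyperparameters.
CONFIG = {
    "D_zy": 5,         # dimensionality of outcome-specific latent factors
    "D_zt": 15,        # dimensionality of treatment-specific latent factors
    "D_zc": 15,        # dimensionality of confounding latent factors
    "alpha_t": 100.0,  # weight of the treatment auxiliary loss
    "alpha_y": 100.0,  # weight of the outcome auxiliary loss
    "hidden_layers": 5,
    "hidden_units": 100,
    "activation": "ELU",
}

def mlp_layer_sizes(d_in, d_out, cfg=CONFIG):
    """Layer widths for one parametrized network: input, 5 hidden layers of 100 units, output."""
    return [d_in] + [cfg["hidden_units"]] * cfg["hidden_layers"] + [d_out]

def train_val_test_split(n, seed=0):
    """60%/30%/10% split of n sample indices, as reported in the paper."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train, n_val = int(0.6 * n), int(0.3 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

For example, with a hypothetical 25-dimensional covariate input and a scalar output, `mlp_layer_sizes(25, 1)` gives `[25, 100, 100, 100, 100, 100, 1]`.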