Causal-StoNet: Causal Inference for High-Dimensional Complex Data

Authors: Yaxin Fang, Faming Liang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive numerical studies indicate that the proposed approach outperforms existing ones." and "4 Numerical Examples"
Researcher Affiliation | Academia | Yaxin Fang, Department of Statistics, Purdue University, West Lafayette, IN 47907, USA (fang230@purdue.edu); Faming Liang, Department of Statistics, Purdue University, West Lafayette, IN 47907, USA (fmliang@purdue.edu)
Pseudocode | Yes | "Algorithm 1: An Adaptive SGHMC algorithm for training StoNet" (a generic SGHMC sketch appears after the table)
Open Source Code | Yes | "The code of the experiments is available at: https://github.com/nixay/Causal-StoNet"
Open Datasets | Yes | "The Causal-StoNet is compared with baseline methods on 10 synthetic datasets with homogeneous treatment effect from the Atlantic Causal Inference Conference (ACIC) 2019 Data Challenge." and "The Breast Cancer dataset from the TCGA database collects clinical data and gene expression data for breast cancer patients."
Dataset Splits | Yes | "n_train ∈ {800, 1600, 2400, 3200, 4000}, n_val ∈ {200, 400, 600, 800, 1000}, n_test ∈ {200, 400, 600, 800, 1000}" and "We conducted the experiment in three-fold cross-validation, where we partitioned the dataset into three subsets, trained the model using two subsets, and estimated the ATE using the remaining one." (a cross-validation sketch appears after the table)
Hardware Specification | No | The paper does not provide specific hardware details (such as GPU or CPU models, or memory specifications) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., TensorFlow 2.x or PyTorch 1.x) used in the experiments.
Experiment Setup | Yes | "The network has 3 hidden layers with structure 32-16-8-1, where Tanh is used as the activation function. The epochs for pre-training, training, and refining after sparsification are 50, 200, and 200, respectively. Initial imputation learning rates are set as ϵ1 = 3×10⁻³, ϵ2 = 3×10⁻⁴, ϵ3 = 5×10⁻⁷... Initial learning rates for the training stage are γ1 = 10⁻³, γ2 = 10⁻⁶, γ3 = 10⁻⁸, γ4 = 5×10⁻¹³..." (a network skeleton appears after the table)
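
The paper's Algorithm 1 is an adaptive SGHMC routine for training the StoNet, which the table only names. As a rough illustration, here is a minimal sketch of one generic (non-adaptive) SGHMC parameter update in PyTorch; the `friction` and `temperature` arguments and the single-step form are assumptions for illustration, not the paper's exact Algorithm 1, which additionally alternates parameter updates with a latent-variable imputation step.

```python
import torch

def sghmc_step(params, grads, momenta, lr, friction=0.1, temperature=1.0):
    """One generic SGHMC update (a sketch, not the paper's adaptive Algorithm 1).

    Standard discretization (Chen et al., 2014):
        v     <- (1 - friction) * v - lr * grad + N(0, 2 * friction * temperature * lr)
        theta <- theta + v
    """
    with torch.no_grad():  # in-place updates must happen outside autograd
        for p, g, v in zip(params, grads, momenta):
            noise = torch.randn_like(p) * (2.0 * friction * temperature * lr) ** 0.5
            v.mul_(1.0 - friction).add_(-lr * g).add_(noise)  # momentum update with injected noise
            p.add_(v)                                         # parameter update
```

In practice the momenta would be initialized to zeros and the learning rate decayed over iterations, consistent with the per-stage initial learning rates quoted in the Experiment Setup row.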
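The three-fold scheme described in the Dataset Splits row (train on two folds, estimate the ATE on the held-out fold) can be sketched as below. `fit_fn` and `ate_fn` are hypothetical placeholders for model training and fold-level ATE estimation, not functions from the Causal-StoNet repo, and averaging the three fold estimates is an assumption; the paper only states that the ATE is estimated on the remaining subset.

```python
import numpy as np
from sklearn.model_selection import KFold

def three_fold_ate(X, t, y, fit_fn, ate_fn, seed=0):
    """For each fold, fit on the other two folds and estimate the
    ATE on the held-out fold.  fit_fn(X, t, y) -> model and
    ate_fn(model, X, t, y) -> float are hypothetical placeholders."""
    kf = KFold(n_splits=3, shuffle=True, random_state=seed)
    fold_estimates = []
    for train_idx, est_idx in kf.split(X):
        model = fit_fn(X[train_idx], t[train_idx], y[train_idx])
        fold_estimates.append(ate_fn(model, X[est_idx], t[est_idx], y[est_idx]))
    return float(np.mean(fold_estimates))  # averaging the folds is an assumption
```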
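The 32-16-8-1 architecture with Tanh activations quoted in the Experiment Setup row corresponds to the deterministic skeleton below. This is a sketch only: the actual Causal-StoNet is a stochastic network that adds latent noise at the hidden layers and feeds the treatment indicator into an intermediate layer, which this skeleton omits; `input_dim` is a placeholder.

```python
import torch.nn as nn

input_dim = 100  # placeholder: set to the number of covariates in the data

# Deterministic skeleton of the reported 32-16-8-1 structure with Tanh.
# Training schedule quoted in the paper: 50 epochs pre-training,
# 200 epochs training, 200 epochs refining after sparsification.
net = nn.Sequential(
    nn.Linear(input_dim, 32), nn.Tanh(),
    nn.Linear(32, 16), nn.Tanh(),
    nn.Linear(16, 8), nn.Tanh(),
    nn.Linear(8, 1),
)
```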