DiffPO: A causal diffusion model for learning distributions of potential outcomes

Authors: Yuchen Ma, Valentyn Melnychuk, Jonas Schweisthal, Stefan Feuerriegel

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Across a wide range of experiments, we show that our method achieves state-of-the-art performance."
Researcher Affiliation | Academia | "Yuchen Ma, Valentyn Melnychuk, Jonas Schweisthal & Stefan Feuerriegel, LMU Munich, Munich Center for Machine Learning; yuchen.ma@lmu.de"
Pseudocode | No | The paper describes processes and equations but does not include a formal pseudocode or algorithm block.
Open Source Code | Yes | Code is available at https://github.com/yccm/DiffPO.
Open Datasets | Yes | "We thus follow prior literature (e.g., [22, 38]) and benchmark our model using synthetic datasets." ... "We estimate the CATE across ACIC 2016 & ACIC 2018, which are widely used dataset collections for CATE benchmarking [66, 12, 42]." ... "IHDP dataset. This is a semi-synthetic dataset from the Infant Health and Development Program (IHDP) [25]."
Dataset Splits | Yes | "We use a ten-fold split for train/test samples (80%/20%)." ... "We use five random train/test splits (80%/20%) for each dataset, tune hyperparameters on the first split, and evaluate the average out-of-sample performance on every split."
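The split protocol quoted above (five random 80%/20% train/test splits, with hyperparameters tuned on the first split only) can be sketched as follows. This is a minimal illustration, not the paper's code; the function name `random_splits` and the fixed seed are assumptions made here for reproducibility of the sketch.

```python
import numpy as np

def random_splits(n_samples, n_splits=5, test_frac=0.2, seed=0):
    """Generate independent random train/test index splits.

    Returns a list of (train_idx, test_idx) pairs, one per split.
    """
    rng = np.random.default_rng(seed)
    splits = []
    n_test = int(n_samples * test_frac)
    for _ in range(n_splits):
        perm = rng.permutation(n_samples)
        # First n_test shuffled indices form the test set, the rest train.
        splits.append((perm[n_test:], perm[:n_test]))
    return splits

splits = random_splits(1000)
# Per the quoted protocol: tune hyperparameters on the first split only,
# then report the average out-of-sample metric over all five splits.
train0, test0 = splits[0]
```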
Hardware Specification | Yes | "Experiments were carried out on 1 NVIDIA A100-PCIE-40GB."
Software Dependencies | No | "We implemented our DiffPO in PyTorch." However, no specific version numbers for PyTorch or other software dependencies are provided.
Experiment Setup | Yes | "The number of diffusion sampling steps is 100. Training is conducted with a batch size of 256 and a learning rate of 0.0005."
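The reported hyperparameters (100 diffusion sampling steps, batch size 256, learning rate 0.0005) can be collected into a minimal configuration sketch. The linear beta schedule below is a common DDPM default used here for illustration only; the paper's exact noise schedule is not specified in the quoted text.

```python
import numpy as np

# Hyperparameters reported in the paper.
config = {
    "diffusion_steps": 100,
    "batch_size": 256,
    "learning_rate": 5e-4,
}

def linear_beta_schedule(n_steps, beta_start=1e-4, beta_end=0.02):
    """Linear noise schedule (a standard DDPM choice, assumed here).

    Returns the per-step noise variances beta_1, ..., beta_T.
    """
    return np.linspace(beta_start, beta_end, n_steps)

betas = linear_beta_schedule(config["diffusion_steps"])
# Cumulative product of (1 - beta_t), used to noise samples in closed form.
alphas_cumprod = np.cumprod(1.0 - betas)
```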