Estimating Multi-cause Treatment Effects via Single-cause Perturbation
Authors: Zhaozhi Qian, Alicia Curth, Mihaela van der Schaar
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the performance gain of SCP on extensive synthetic and semi-synthetic experiments. |
| Researcher Affiliation | Academia | Zhaozhi Qian, University of Cambridge, zhaozhi.qian@maths.cam.ac.uk; Alicia Curth, University of Cambridge, amc253@cam.ac.uk; Mihaela van der Schaar, University of Cambridge / UCLA / The Alan Turing Institute, mv472@cam.ac.uk |
| Pseudocode | Yes | The pseudocode is detailed in Appendix A.7 Algorithm 1. |
| Open Source Code | Yes | The implementation of SCP and the experiment code are available at https://github.com/ZhaozhiQian/Single-Cause-Perturbation-NeurIPS-2021 or https://github.com/orgs/vanderschaarlab/repositories |
| Open Datasets | No | The paper uses synthetic datasets created by the authors and the de-identified COVID-19 Hospitalization in England Surveillance System (CHESS) data, but does not provide concrete access links, DOIs, or formal citations for public availability of these datasets. |
| Dataset Splits | Yes | Each dataset contains N0 samples for training, 200 samples for validation and 4000 for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions specific algorithms and models (e.g., DR-CFR, neural networks, VSR, Deconfounder) but does not provide specific version numbers for any software dependencies or libraries used in their implementation. |
| Experiment Setup | Yes | For all neural networks, we use three hidden layers with 100 nodes each and ReLU activation functions. The weights are initialized using Glorot uniform. We use the Adam optimizer with learning rate 10^-4 and batch size 64. Training is stopped when the validation error does not improve for 10 epochs. We use mean squared error (MSE) as the loss function. The DR-CFR parameters were set to default values as in [22]. |
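The network configuration quoted above (three hidden layers of 100 ReLU units, Glorot-uniform initialization, MSE loss, batches of 64) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the deep-learning framework they used is not stated in this table, and the Adam update and early-stopping loop are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(fan_in, fan_out):
    # Glorot/Xavier uniform: U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def build_mlp(d_in, d_out, hidden=(100, 100, 100)):
    # Three hidden layers of 100 nodes each, as in the paper's setup
    dims = (d_in, *hidden, d_out)
    return [(glorot_uniform(a, b), np.zeros(b)) for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    # ReLU on hidden layers, linear output layer
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

def mse(pred, target):
    # Mean squared error loss
    return float(np.mean((pred - target) ** 2))

# Hypothetical input dimension of 10 and scalar outcome, for illustration only
params = build_mlp(d_in=10, d_out=1)
x = rng.normal(size=(64, 10))  # one batch of 64, matching the reported batch size
y_hat = forward(params, x)
loss = mse(y_hat, np.zeros((64, 1)))
```

In a real run these parameters would be updated with Adam at learning rate 10^-4, with training halted once the validation MSE fails to improve for 10 consecutive epochs.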