Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks

Authors: Ioana Bica, James Jordon, Mihaela van der Schaar

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the experiments section, we introduce a new semi-synthetic data simulation for use in the continuous intervention setting and demonstrate improvements over the existing benchmark models. We show, using semi-synthetic experiments, that our model outperforms existing benchmarks. The addition of each component results in improved performance, with the final row (our full model) demonstrating the best performance across both datasets and for all metrics.
Researcher Affiliation | Academia | Ioana Bica, Department of Engineering Science, University of Oxford, Oxford, UK; The Alan Turing Institute, London, UK (ioana.bica@eng.ox.ac.uk). James Jordon, Department of Engineering Science, University of Oxford, Oxford, UK (james.jordon@wolfson.ox.ac.uk). Mihaela van der Schaar, University of Cambridge, Cambridge, UK; University of California, Los Angeles, USA; The Alan Turing Institute, London, UK (mv472@cam.ac.uk).
Pseudocode | Yes | Pseudo-code for our algorithm can be found in Appendix D.
Open Source Code | Yes | The implementation of SCIGAN can be found at https://bitbucket.org/mvdschaar/mlforhealthlabpub/ and at https://github.com/ioanabica/SCIGAN.
Open Datasets | Yes | We obtain features, x, from a real dataset (in this paper we use TCGA [22], News [19, 23] and MIMIC III [24]). (A hedged sketch of such a semi-synthetic simulation appears below the table.)
Dataset Splits | No | The paper mentions training datasets and evaluates performance, but it does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, and testing. (A conventional split, assumed rather than reported, is sketched below the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | No | The paper describes some aspects of the experimental setup, such as the data-generation parameters and the default value of 'nw', and mentions hyperparameter optimization for the benchmarks, but it does not report concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) for its own model.
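
The Open Datasets row notes that the paper draws real features x from TCGA, News and MIMIC III and then simulates continuous interventions and outcomes on top of them. The sketch below illustrates what such a semi-synthetic, continuous-dosage generator could look like; the response surface `dose_response`, its weight vectors, the Beta dosage-assignment mechanism and the noise level are all illustrative assumptions, not the paper's actual data-generating process.

```python
import numpy as np

def dose_response(x, d, v1, v2):
    # Hypothetical dose-response surface: the outcome depends on the real
    # features x and a continuous dosage d in (0, 1). The functional form
    # and the weight vectors v1, v2 are illustrative only.
    return 10.0 * (x @ v1 + 12.0 * d * (d - 0.75 * (x @ v2)) ** 2)

def simulate(features, noise_std=0.2, seed=0):
    # Attach a simulated dosage and outcome to a matrix of real features.
    rng = np.random.default_rng(seed)
    n, p = features.shape
    v1 = rng.normal(size=p)
    v1 /= np.linalg.norm(v1)
    v2 = rng.normal(size=p)
    v2 /= np.linalg.norm(v2)
    # Observed dosages are skewed towards a feature-dependent value to mimic
    # treatment-assignment bias: Beta(2, 1/m) has its mode at m.
    m = np.clip(0.75 * (features @ v2), 0.05, 0.95)
    dosage = rng.beta(2.0, 1.0 / m)
    outcome = dose_response(features, dosage, v1, v2)
    return dosage, outcome + rng.normal(0.0, noise_std, size=n)

# Placeholder covariates standing in for TCGA / News / MIMIC III features.
X = np.random.default_rng(1).uniform(size=(5000, 20))
d, y = simulate(X)
```

Skewing the dosage distribution towards a covariate-dependent mode is one simple way to introduce the treatment-assignment bias that makes counterfactual dose-response estimation non-trivial.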
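
Because the Dataset Splits row flags that no split proportions are reported, a re-implementation has to pick its own. The snippet below shows one conventional choice (20% test, then 20% of the remainder for validation) using scikit-learn's `train_test_split`; the fractions and the random seed are assumed defaults, not the authors' protocol.

```python
from sklearn.model_selection import train_test_split

def make_splits(X, d, y, test_frac=0.2, val_frac=0.2, seed=0):
    # Assumed split: hold out `test_frac` for testing, then carve `val_frac`
    # of the remainder out for validation. These fractions are placeholders;
    # the paper does not state which proportions were used.
    X_tmp, X_test, d_tmp, d_test, y_tmp, y_test = train_test_split(
        X, d, y, test_size=test_frac, random_state=seed)
    X_tr, X_val, d_tr, d_val, y_tr, y_val = train_test_split(
        X_tmp, d_tmp, y_tmp, test_size=val_frac, random_state=seed)
    return (X_tr, d_tr, y_tr), (X_val, d_val, y_val), (X_test, d_test, y_test)

# Reusing the simulated (X, d, y) from the sketch above; any arrays of equal
# length work here.
train, val, test = make_splits(X, d, y)
```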