Multi-task Causal Learning with Gaussian Processes

Authors: Virginia Aglietti, Theodoros Damoulas, Mauricio Álvarez, Javier González

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We test both the quality of its predictions and its calibrated uncertainties. Compared to single-task models, DAG-GP achieves the best fitting performance in a variety of real and synthetic settings. In addition, it helps to select optimal interventions faster than competing approaches when used within sequential decision making frameworks, like active learning or Bayesian optimization. (A generic evaluation sketch illustrating these criteria follows the table.) |
| Researcher Affiliation | Collaboration | Virginia Aglietti (University of Warwick; The Alan Turing Institute), V.Aglietti@warwick.ac.uk; Theodoros Damoulas (University of Warwick; The Alan Turing Institute), T.Damoulas@warwick.ac.uk; Mauricio A. Álvarez (University of Sheffield), Mauricio.Alvarez@sheffield.ac.uk; Javier González (Microsoft Research Cambridge), Gonzalez.Javier@microsoft.com |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks or figures. |
| Open Source Code | Yes | Code and data for all the experiments are provided at https://github.com/VirgiAgl/DAG-GP. |
| Open Datasets | Yes | DAG3 is taken from [33] and [13] and is used to model the causal effect of statin drugs on levels of prostate-specific antigen (PSA). |
| Dataset Splits | No | The paper mentions using an observational dataset D^O and an interventional dataset D^I of varying sizes and initializations, but it does not specify explicit training, validation, and test splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | The paper does not provide any details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list version numbers for the software components or libraries used in the experiments. |
| Experiment Setup | No | The paper mentions setting the size of D^I for each DAG (e.g., setting the size of D^I to 5|T|) and states that implementation details are given in the supplement, but it does not provide specific hyperparameters or system-level training settings in the main text. |
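The Research Type row quotes the paper's claim that both prediction quality and calibrated uncertainties are tested. As a point of reference for what that kind of evaluation usually involves, here is a minimal sketch, assuming scikit-learn's generic GaussianProcessRegressor, synthetic data, and RMSE, negative log predictive density, and 95% interval coverage as metrics; none of these choices come from the paper, and this is not the authors' DAG-GP implementation (their code is in the repository linked in the Open Source Code row).

```python
# Hedged sketch: illustrates, with a generic GP, the two criteria quoted in the
# Research Type row -- prediction quality and calibrated uncertainty. The model,
# data, and metrics are illustrative assumptions, not the DAG-GP code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for an intervention-response surface (hypothetical data,
# not one of the paper's DAGs).
X_train = rng.uniform(-3, 3, size=(30, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(30)
X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
y_test = np.sin(X_test[:, 0])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)
mean, std = gp.predict(X_test, return_std=True)

# Prediction quality: root mean squared error of the posterior mean.
rmse = np.sqrt(np.mean((mean - y_test) ** 2))

# Calibration: average negative log predictive density under the Gaussian
# posterior, plus empirical coverage of the 95% credible interval.
nlpd = np.mean(0.5 * np.log(2 * np.pi * std**2)
               + 0.5 * ((y_test - mean) / std) ** 2)
coverage = np.mean(np.abs(y_test - mean) <= 1.96 * std)

print(f"RMSE={rmse:.3f}  NLPD={nlpd:.3f}  95% coverage={coverage:.2%}")
```

Lower RMSE and NLPD, together with empirical coverage close to the nominal 95%, are the usual readings of "good fit" and "well-calibrated uncertainty" for a GP-based model.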