MIRACLE: Causally-Aware Imputation via Learning Missing Data Mechanisms

Authors: Trent Kyono, Yao Zhang, Alexis Bellot, Mihaela van der Schaar

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We conduct extensive experiments on synthetic and a variety of publicly available datasets to show that MIRACLE is able to consistently improve imputation over a variety of benchmark methods across all three missingness scenarios: at random, completely at random, and not at random." (A minimal MCAR sketch appears after the table.) |
| Researcher Affiliation | Academia | Trent Kyono (University of California, Los Angeles, tmkyono@ucla.edu); Yao Zhang (University of Cambridge, yz555@cam.ac.uk); Alexis Bellot (University of Oxford, alexis.bellot@eng.ox.ac.uk); Mihaela van der Schaar (University of Cambridge, University of California, Los Angeles, and The Alan Turing Institute, mv472@cam.ac.uk) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code is available at https://github.com/vanderschaarlab/MIRACLE. |
| Open Datasets | Yes | The paper uses a "variety of publicly available UCI datasets [8]". |
| Dataset Splits | No | The paper states "We use an 80-20 train-test split" but does not specify a validation split. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models, memory, or computing environment) used for its experiments. |
| Software Dependencies | No | The paper mentions tensorflow and other libraries such as scikit-learn but does not provide version numbers for any software dependency. |
| Experiment Setup | Yes | "We performed a hyperparameter sweep (log-based) for β1 and β2 with ranges between 1e-3 and 100. By default we have β1 and β2 set to 0.1 and 1, respectively." (See the sweep sketch after the table.) |
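
The three missingness scenarios quoted in the Research Type row (missing at random, missing completely at random, missing not at random) differ in how the missingness mask depends on the data. As a minimal point of reference, the sketch below simulates only the simplest scenario, MCAR, where each entry is dropped independently of all values; the function name, missingness rate, and seed are illustrative and not taken from the paper.

```python
import numpy as np

def simulate_mcar(X: np.ndarray, p_missing: float = 0.3, seed: int = 0) -> np.ndarray:
    """Return a copy of X where each entry is independently set to NaN
    with probability p_missing (missing completely at random)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < p_missing  # True => entry is dropped
    X_missing = X.astype(float)  # astype returns a fresh copy
    X_missing[mask] = np.nan
    return X_missing

# Example: corrupt a small matrix, then check the empirical missingness rate.
X = np.arange(20, dtype=float).reshape(4, 5)
X_mcar = simulate_mcar(X, p_missing=0.3)
print(np.isnan(X_mcar).mean())  # roughly 0.3
```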
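The Dataset Splits row quotes an 80-20 train-test split with no validation set. A minimal sketch of that split, assuming scikit-learn (which the paper mentions); the toy data and the random seed are illustrative, as the paper gives neither:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in for one of the UCI feature matrices used in the paper.
X = np.random.default_rng(0).normal(size=(1000, 10))

# The 80-20 train-test split reported in the paper; the paper does not
# describe a separate validation set, so none is carved out here either.
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (800, 10) (200, 10)
```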
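The Experiment Setup row describes a log-based sweep of β1 and β2 over the range 1e-3 to 100. The sketch below shows one way such a sweep could look; the grid resolution and the `train_and_score` stand-in are assumptions, since the paper specifies only the range and the defaults (0.1 and 1).

```python
import itertools
import numpy as np

# Log-spaced grid between 1e-3 and 1e2, matching the sweep range in the paper.
# Six points per axis is an assumption; the paper does not state the resolution.
beta_grid = np.logspace(-3, 2, num=6)  # [1e-3, 1e-2, 1e-1, 1, 10, 100]

def train_and_score(beta_1: float, beta_2: float) -> float:
    """Hypothetical stand-in for training MIRACLE with the given
    regularization weights and returning an imputation error
    (lower is better). Dummy score whose minimum sits at the
    paper's defaults (0.1, 1)."""
    return abs(np.log10(beta_1 / 0.1)) + abs(np.log10(beta_2 / 1.0))

best = min(itertools.product(beta_grid, beta_grid),
           key=lambda betas: train_and_score(*betas))
print(f"best beta_1={best[0]:g}, beta_2={best[1]:g}")
```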