Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

MIRACLE: Causally-Aware Imputation via Learning Missing Data Mechanisms

Authors: Trent Kyono, Yao Zhang, Alexis Bellot, Mihaela van der Schaar

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We conduct extensive experiments on synthetic and a variety of publicly available datasets to show that MIRACLE is able to consistently improve imputation over a variety of benchmark methods across all three missingness scenarios: at random, completely at random, and not at random." |
| Researcher Affiliation | Academia | Trent Kyono (University of California, Los Angeles); Yao Zhang (University of Cambridge); Alexis Bellot (University of Oxford, Oxford, United Kingdom); Mihaela van der Schaar (University of Cambridge; University of California, Los Angeles; The Alan Turing Institute) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code is available at https://github.com/vanderschaarlab/MIRACLE. |
| Open Datasets | Yes | The paper uses a "variety of publicly available UCI datasets [8]". |
| Dataset Splits | No | The paper states "We use an 80-20 train-test split" but does not specify a validation split. |
| Hardware Specification | No | The paper does not report specific hardware details (GPU/CPU models, memory, or computing environment) used for its experiments. |
| Software Dependencies | No | The paper mentions TensorFlow and other libraries such as scikit-learn but does not give version numbers for any software dependency. |
| Experiment Setup | Yes | "We performed a hyperparameter sweep (log-based) for β1 and β2 with ranges between 1e-3 and 100. By default we have β1 and β2 set to 0.1 and 1, respectively." |
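The log-based sweep quoted above can be sketched as a small grid search. This is a hypothetical illustration, not the authors' code: the grid size (`num=6`) and the `evaluate` objective are assumptions; only the range (1e-3 to 100) and the defaults (β1=0.1, β2=1) come from the paper.

```python
import numpy as np

def log_sweep(low=1e-3, high=100.0, num=6):
    """Log-spaced candidate values in [low, high] (range from the paper)."""
    return np.logspace(np.log10(low), np.log10(high), num=num)

def grid_search(evaluate, num=6):
    """Evaluate every (beta1, beta2) pair and return the best-scoring one.

    `evaluate` is a placeholder for the imputation-error metric
    (lower is better); it is not specified in the paper text.
    """
    candidates = log_sweep(num=num)
    return min(
        ((b1, b2) for b1 in candidates for b2 in candidates),
        key=lambda pair: evaluate(*pair),
    )

# Toy objective minimized at the paper's defaults (beta1=0.1, beta2=1);
# the sweep recovers those values from the log-spaced grid.
toy = lambda b1, b2: (np.log10(b1) + 1) ** 2 + np.log10(b2) ** 2
best_b1, best_b2 = grid_search(toy)
```

With six log-spaced points per axis the grid is {1e-3, 1e-2, 0.1, 1, 10, 100}, so the toy search lands exactly on the defaults; a finer grid would refine within the same range.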