Deep Sensing: Active Sensing using Multi-directional Recurrent Neural Networks
Authors: Jinsung Yoon, William R. Zame, Mihaela van der Schaar
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate the power of our method, we apply it to two real-world medical datasets with significantly improved performance. In this section, we evaluate the performance of Deep Sensing using two real-world medical datasets. Our experimental results present three sets of comparisons: active sensing, prediction, and missing value inference. |
| Researcher Affiliation | Academia | Jinsung Yoon, Department of Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA 90095, USA, jsyoon0823@g.ucla.edu; William R. Zame, Department of Mathematics and Department of Economics, University of California, Los Angeles, Los Angeles, CA 90095, USA, zame@econ.ucla.edu; Mihaela van der Schaar, Department of Engineering Science, University of Oxford, Oxford, UK, and Alan Turing Institute, London, UK, mihaela.vanderschaar@eng.ox.ac.uk |
| Pseudocode | Yes | Algorithm 1 Deep Sensing Training Stage; Algorithm 2 Deep Sensing Testing Stage |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | The first of these datasets is MIMIC-III (Johnson et al. (2016)) which records data on patients in intensive care units (ICU). The second of these datasets, which we call Wards, was assembled and described by Alaa et al. (2017b). |
| Dataset Splits | No | We randomly divided the dataset into a mutually exclusive training set (80%) and testing set (20%). The paper specifies a training and testing split, but does not explicitly mention a distinct validation set split. |
| Hardware Specification | Yes | For instance, on the MIMIC-III dataset (23,200 samples, 40 dimensions, 25 time stamps), Deep Sensing takes less than 1 hour on a machine with an i7-6900K CPU (3.2GHz x 16) and 64GB RAM. |
| Software Dependencies | No | We used the interp1 package in MATLAB for interpolation algorithms, and the mice and amelia packages in R for imputation algorithms. The paper names specific software packages (interp1, mice, amelia) but does not provide their version numbers. |
| Experiment Setup | Yes | Table 5: Configurations of the Experiments. Optimization: Adam (Kingma & Ba (2014)), learning rate = 0.05; Batch size = 100, Iterations = 1000; Depth = 5; Constraints: the matrix parameters are block-diagonal. |
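The reported setup (Table 5) can be collected into a single configuration for reproduction attempts. The sketch below is a minimal illustration, not the authors' code: the dictionary key names and the `adam_step` helper are my own, and only the hyperparameter values come from the paper. The Adam moment decay rates and epsilon are not stated in the quoted setup, so the standard defaults from Kingma & Ba (2014) are assumed.

```python
# Hyperparameters as reported in Table 5 of the paper.
# Key names are illustrative; only the values are from the paper.
DEEP_SENSING_CONFIG = {
    "optimizer": "Adam",          # Kingma & Ba (2014)
    "learning_rate": 0.05,
    "batch_size": 100,
    "iterations": 1000,
    "depth": 5,
    "constraints": "block-diagonal matrix parameters",
}

def adam_step(theta, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One scalar Adam update.

    lr matches the paper's reported learning rate; b1, b2, eps are the
    standard defaults (assumed, not stated in the quoted setup).
    """
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

As a sanity check of the reported iteration budget, running 1000 Adam steps at learning rate 0.05 on a toy quadratic f(x) = x^2 drives the parameter close to the minimum, which is consistent with the paper's choice of 1000 iterations being a plausible budget at this learning rate.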