Neighborhood Contrastive Learning Applied to Online Patient Monitoring

Authors: Hugo Yèche, Gideon Dresdner, Francesco Locatello, Matthias Hüser, Gunnar Rätsch

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments demonstrate a marked improvement over existing work applying contrastive methods to medical timeseries." (Abstract; Section 5, Experimental Setup)
Researcher Affiliation | Collaboration | (1) Department of Computer Science, ETH Zürich, Switzerland; (2) Amazon (most of the work was done while Francesco Locatello was at ETH Zürich and MPI-IS).
Pseudocode | No | The paper describes its methods and processes in narrative text and diagrams (Figure 2), but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/ratschlab/ncl
Open Datasets | Yes | MIMIC-III Benchmark: "The MIMIC-III dataset (Johnson et al., 2016) is the most commonly used dataset for tasks related to EHR data." (Section 5.1)
Dataset Splits | Yes | "We used early stopping on validation set loss and an Adam optimizer." (Section 5.3) The quoted use of a validation set implies a train/validation split; a minimal early-stopping sketch follows the table.
Hardware Specification | No | The paper does not describe the hardware used to run the experiments (e.g., GPU models, CPU types, or the broader computing infrastructure).
Software Dependencies | No | The paper mentions components such as the Adam optimizer and Temporal Convolutional Networks (TCNs) but does not give version numbers for any software libraries or dependencies used in the experiments.
Experiment Setup | Yes | "We trained all unsupervised methods for 25k steps with a batch size of 2048. We used an Adam optimizer with a linear warm-up between 1e-5 and 1e-3 for 2.5k steps... We used a temperature of 0.1, a queue of size 65536, and an embedding size of 64 for all tasks... for NCL(nw) we chose α = 0.3 and w = 16 on MIMIC-III Benchmark and α = 0.4 and w = 12 on Physionet 2019." (Section 5.3; see the optimizer and queue sketch after the table.)
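
As referenced in the Dataset Splits row, the paper trains with early stopping on validation-set loss (Section 5.3). Below is a minimal sketch of that pattern in PyTorch; the model, synthetic data, and patience value are illustrative assumptions, not taken from the paper (its actual training code is at https://github.com/ratschlab/ncl).

```python
import torch
from torch import nn

# Sketch of early stopping on validation-set loss with Adam, as
# described in Section 5.3. Model, data, and patience are placeholders.
torch.manual_seed(0)
model = nn.Linear(16, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic train/validation splits standing in for the benchmark splits.
x_train, y_train = torch.randn(256, 16), torch.randn(256, 1)
x_val, y_val = torch.randn(64, 16), torch.randn(64, 1)

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    model.train()
    optimizer.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()

    # Stop once validation loss has not improved for `patience` epochs.
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```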
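
The Experiment Setup row reports Adam with a linear warm-up from 1e-5 to 1e-3 over 2.5k steps, a temperature of 0.1, a negative queue of size 65536, and 64-dimensional embeddings. The sketch below wires those numbers into a generic queue-based InfoNCE loop in PyTorch as one plausible reading; the toy encoder, augmentation, and batch size are assumptions, and this is not the authors' NCL implementation (which additionally uses the neighborhood terms controlled by α and w).

```python
import torch
import torch.nn.functional as F

# Reported hyperparameters from Section 5.3; everything else is a stand-in.
embed_dim, queue_size, temperature = 64, 65536, 0.1
warmup_steps, base_lr, start_lr = 2500, 1e-3, 1e-5

encoder = torch.nn.Sequential(
    torch.nn.Linear(32, 128), torch.nn.ReLU(), torch.nn.Linear(128, embed_dim)
)
optimizer = torch.optim.Adam(encoder.parameters(), lr=base_lr)
# Linear warm-up: scales the learning rate from 1e-5 up to 1e-3 over 2.5k steps.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lambda step: min(1.0, start_lr / base_lr
                     + (1 - start_lr / base_lr) * step / warmup_steps),
)

queue = F.normalize(torch.randn(queue_size, embed_dim), dim=1)  # negatives

for step in range(10):  # the paper trains for 25k steps at batch size 2048
    x = torch.randn(256, 32)  # toy batch; the reported batch size is 2048
    q = F.normalize(encoder(x), dim=1)
    k = F.normalize(encoder(x + 0.1 * torch.randn_like(x)), dim=1).detach()

    # InfoNCE: the augmented view is the positive, the queue holds negatives.
    l_pos = (q * k).sum(dim=1, keepdim=True)
    l_neg = q @ queue.t()
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    loss = F.cross_entropy(logits, torch.zeros(len(q), dtype=torch.long))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()

    # FIFO queue update with the newest keys (MoCo-style).
    queue = torch.cat([k, queue])[:queue_size]
```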