Multi-Time Attention Networks for Irregularly Sampled Time Series

Authors: Satya Narayan Shukla, Benjamin Marlin

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We investigate the performance of this framework on interpolation and classification tasks using multiple datasets. Our results show that the proposed approach performs as well or better than a range of baseline and recently proposed models while offering significantly faster training times than current state-of-the-art methods.
Researcher Affiliation | Academia | Satya Narayan Shukla & Benjamin M. Marlin, College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA 01003, USA. {snshukla,marlin}@cs.umass.edu
Pseudocode | No | The paper provides architectural diagrams and mathematical equations for its model but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Implementation available at: https://github.com/reml-lab/mTAN
Open Datasets | Yes | All the datasets used in the experiments are publicly available and can be downloaded using the following links: PhysioNet: https://physionet.org/content/challenge-2012/ MIMIC-III: https://mimic.physionet.org/ Human Activity: https://archive.ics.uci.edu/ml/datasets/Localization+Data+for+Person+Activity
Dataset Splits | Yes | We randomly divide the data set into a training set containing 80% of the instances, and a test set containing the remaining 20% of instances. We use 20% of the training data for validation.
Hardware Specification | Yes | All experiments were run on an NVIDIA Titan X GPU.
Software Dependencies | No | The paper mentions the use of the Adam optimizer and GRU models but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | For classification, experiments are run for 300 iterations with learning rate 0.0001, while for the interpolation task experiments are run for 500 iterations with learning rate 0.001.
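The split protocol and per-task optimization settings quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the dataset size, random seed, function name `split_indices`, and the `CONFIGS` dictionary keys are assumptions; only the 80/20/20 proportions, the Adam optimizer, and the iteration counts and learning rates come from the table.

```python
import numpy as np

# Hedged sketch of the quoted evaluation protocol: an 80%/20% train/test
# split, with 20% of the training portion held out for validation.
# The dataset size and seed below are illustrative assumptions.
def split_indices(n_instances, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_instances)
    n_train = int(0.8 * n_instances)
    train, test = idx[:n_train], idx[n_train:]
    n_val = int(0.2 * len(train))
    val, train = train[:n_val], train[n_val:]
    return train, val, test

# Per-task optimization settings as quoted in the table; the paper names
# the Adam optimizer, while the dictionary structure here is illustrative.
CONFIGS = {
    "classification": {"optimizer": "Adam", "iterations": 300, "lr": 1e-4},
    "interpolation": {"optimizer": "Adam", "iterations": 500, "lr": 1e-3},
}

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))  # 640 160 200
```

For 1000 instances this yields 640 training, 160 validation, and 200 test indices, matching the stated 80/20 split with a further 20% validation hold-out.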