Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency

Authors: Owen Queen, Thomas Hartvigsen, Teddy Koker, Huan He, Theodoros Tsiligkaridis, Marinka Zitnik

NeurIPS 2023

Reproducibility variables, each listed with its result and the supporting LLM response:
Research Type: Experimental. "We evaluate TIMEX on eight synthetic and real-world datasets and compare its performance against state-of-the-art interpretability methods. We also conduct case studies using physiological time series. Quantitative evaluations demonstrate that TIMEX achieves the highest or second-highest performance in every metric compared to baselines across all datasets."
Researcher Affiliation: Collaboration.
- Owen Queen, Harvard University (owen_queen@hms.harvard.edu)
- Thomas Hartvigsen, University of Virginia & MIT (hartvigsen@virginia.edu)
- Teddy Koker, MIT Lincoln Laboratory (thomas.koker@ll.mit.edu)
- Huan He, Harvard University (huan_he@hms.harvard.edu)
- Theodoros Tsiligkaridis, MIT Lincoln Laboratory (tsili@ll.mit.edu)
- Marinka Zitnik, Harvard University (marinka@hms.harvard.edu)
Pseudocode: Yes. "Algorithm 1: Landmark filtration"
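The paper's pseudocode is not reproduced here, but to make the entry concrete, below is a minimal sketch of what a landmark-filtration step could look like: each explanation embedding is assigned to its nearest landmark, and landmarks matched by too few embeddings are dropped. The cosine-similarity assignment and the `min_count` threshold are our assumptions; consult Algorithm 1 in the paper for the authors' exact procedure.

```python
import torch

def filter_landmarks(landmarks: torch.Tensor,
                     embeddings: torch.Tensor,
                     min_count: int = 2) -> torch.Tensor:
    """Keep landmarks that are the nearest landmark of >= min_count embeddings.

    landmarks: (L, d) candidate landmark embeddings.
    embeddings: (N, d) explanation embeddings.
    """
    # Cosine similarity between every embedding and every landmark: (N, L).
    sim = torch.nn.functional.cosine_similarity(
        embeddings.unsqueeze(1), landmarks.unsqueeze(0), dim=-1)
    nearest = sim.argmax(dim=1)  # index of each embedding's closest landmark
    counts = torch.bincount(nearest, minlength=landmarks.size(0))
    return landmarks[counts >= min_count]  # drop rarely matched landmarks

# Toy usage: 50 candidate landmarks, 1,000 explanation embeddings.
kept = filter_landmarks(torch.randn(50, 128), torch.randn(1000, 128))
print(kept.shape)
```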
Open Source Code: Yes. "TIMEX is at https://github.com/mims-harvard/TimeX"
Open Datasets: Yes. "We employ four datasets from real-world time series classification tasks: ECG [81] (ECG arrhythmia detection); PAM [82] (human activity recognition); Epilepsy [83] (EEG seizure detection); and Boiler [84] (mechanical fault detection)."
Dataset Splits: Yes. "We create 5,000 training samples, 1,000 testing samples, and 100 validation samples for each dataset."
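As a minimal sketch, the reported 5,000/100/1,000 train/validation/test split can be reproduced with `torch.utils.data.random_split`; the seed, sequence length, and synthetic tensors below are illustrative assumptions, not the paper's data.

```python
import torch
from torch.utils.data import TensorDataset, random_split

n_train, n_val, n_test = 5_000, 100, 1_000

# Placeholder time-series data: (samples, time steps, channels); the
# 500-step univariate shape is an assumption for illustration only.
X = torch.randn(n_train + n_val + n_test, 500, 1)
y = torch.randint(0, 2, (len(X),))

train_set, val_set, test_set = random_split(
    TensorDataset(X, y),
    [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(0),  # seed is our choice
)
print(len(train_set), len(val_set), len(test_set))  # 5000 100 1000
```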
Hardware Specification: Yes. "For computational resources, we use a GPU cluster with various GPUs, ranging from 32 GB Tesla V100 GPUs to 48 GB RTX 8000 GPUs."
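Before rerunning the experiments, it can help to confirm the local accelerator falls in the reported 32-48 GB memory range; the small check below is our addition, not the authors' tooling.

```python
import torch

# List CUDA devices and their memory; the paper reports GPUs ranging
# from 32 GB V100s up to 48 GB RTX 8000s.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
else:
    print("No CUDA device visible; the reported runs used a GPU cluster.")
```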
Software Dependencies: Yes. "We implemented all methods in this study using Python 3.8+ and PyTorch 2.0."
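A minimal sanity check that an environment matches the stated dependencies; the assertions are our sketch, not part of the TIMEX codebase.

```python
import sys
import torch

# Verify the reported minimums: Python 3.8+ and PyTorch 2.x.
assert sys.version_info >= (3, 8), "paper reports Python 3.8+"
assert int(torch.__version__.split(".")[0]) >= 2, "paper reports PyTorch 2.0"
print(f"Python {sys.version.split()[0]}, PyTorch {torch.__version__}")
```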
Experiment Setup: Yes. "Table 6: Training parameters for TIMEX across all ground-truth attribution experiments. Table 7: Training parameters for TIMEX across all real-world datasets used for the occlusion experiments. Table 8: Training parameters for transformer predictors across all ground-truth attribution experiment datasets. Table 9: Training parameters for transformer predictors across all real-world datasets used for the occlusion experiments."
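The per-dataset parameters themselves live in Tables 6-9 of the paper; below is a hypothetical sketch of how they might be organized in code, with every value left as a placeholder rather than filled in from the tables.

```python
# Hypothetical layout for the training parameters of Tables 6-9; every
# value is a deliberate placeholder (Ellipsis), not a number from the
# paper. Consult the tables for the actual settings.
timex_params = {
    "ECG":      {"epochs": ..., "lr": ..., "batch_size": ..., "weight_decay": ...},
    "PAM":      {"epochs": ..., "lr": ..., "batch_size": ..., "weight_decay": ...},
    "Epilepsy": {"epochs": ..., "lr": ..., "batch_size": ..., "weight_decay": ...},
    "Boiler":   {"epochs": ..., "lr": ..., "batch_size": ..., "weight_decay": ...},
}
```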