Explaining Time Series via Contrastive and Locally Sparse Perturbations

Authors: Zichuan Liu, Yingying Zhang, Tianchun Wang, Zefan Wang, Dongsheng Luo, Mengnan Du, Min Wu, Yi Wang, Chunlin Chen, Lunting Fan, Qingsong Wen

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical studies on both synthetic and real-world datasets show that ContraLSP outperforms state-of-the-art models, demonstrating a substantial improvement in explanation quality for time series data. "In this section, we evaluate the explainability of the proposed method on synthetic datasets (where true feature importance is accessible) for both regression (white-box) and classification (black-box), as well as on more intricate real-world clinical tasks."
Researcher Affiliation | Collaboration | Zichuan Liu (Nanjing University, Alibaba Group), Yingying Zhang (Alibaba Group), Tianchun Wang (Pennsylvania State University), Zefan Wang (Alibaba Group, Tsinghua University), Dongsheng Luo (Florida International University), Mengnan Du (New Jersey Institute of Technology), Min Wu (A*STAR), Yi Wang (The University of Hong Kong), Chunlin Chen (Nanjing University), Lunting Fan (Alibaba Group), Qingsong Wen (Alibaba Group)
Pseudocode | Yes | Algorithm 1: "Selection of a triplet sample" (a hedged sketch of this step follows the table).
Open Source Code | Yes | "The source code is available at https://github.com/zichuan-liu/ContraLSP."
Open Datasets | Yes | "We use the MIMIC-III dataset (Johnson et al., 2016), which is a comprehensive clinical time series dataset encompassing various vital and laboratory measurements."
Dataset Splits | No | The paper describes total sample sizes and training procedures, but it does not explicitly provide the percentages or counts for the training, validation, and test splits needed for reproduction.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as specific CPU or GPU models.
Software Dependencies | No | The paper refers to its codebase for implementation details but does not explicitly list software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | "We list hyperparameters for each experiment performed in Table 6, and for the triplet loss, the margin parameter b is consistently set to 1. The sizes of K+ and K− are chosen depending on the numbers of positive and negative samples (|Ω+| and |Ω−|). In the perturbation function φ_θ1(·), we use a single-layer bidirectional GRU, which corresponds to a generalization of the fixed perturbation. In the trend function τ_θ2(·), we employ an independent MLP for each observation d to find its trend, whose details are shown in Table 7." (A hedged architecture sketch follows the table.)
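
For orientation, here is a minimal Python sketch of what a triplet-selection step in the spirit of Algorithm 1 could look like. The function name, the cosine-similarity scoring, and the flattened-window representation are assumptions for illustration; the paper's exact selection rule over Ω+ and Ω− may differ.

```python
import numpy as np

def select_triplet(anchor, pool, k_pos, k_neg):
    """Hypothetical triplet selection: rank pool samples by cosine
    similarity to the anchor, take the k_pos most similar as positives
    and the k_neg least similar as negatives.

    anchor: array of shape (T, D); pool: array of shape (N, T, D).
    """
    a = anchor.reshape(-1)
    p = pool.reshape(len(pool), -1)
    sims = p @ a / (np.linalg.norm(p, axis=1) * np.linalg.norm(a) + 1e-8)
    order = np.argsort(sims)          # ascending similarity
    positives = pool[order[-k_pos:]]  # most similar samples
    negatives = pool[order[:k_neg]]   # least similar samples
    return positives, negatives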
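```

With the margin b = 1 quoted above, the selected triplets would feed a standard triplet loss of the form max(0, d(anchor, positive) − d(anchor, negative) + 1), e.g. torch.nn.TripletMarginLoss(margin=1.0).

Likewise, a minimal PyTorch sketch of the two components named in the experiment setup: a single-layer bidirectional GRU as the perturbation function φ_θ1(·) and an independent per-feature MLP as the trend function τ_θ2(·). The hidden sizes, sigmoid gating, and module layout are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class PerturbationSketch(nn.Module):
    """Sketch of phi_theta1 (single-layer bidirectional GRU) and
    tau_theta2 (one small MLP per feature d), for inputs of shape
    (batch, time, features)."""

    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        # phi_theta1: single-layer bidirectional GRU over the sequence
        self.gru = nn.GRU(n_features, hidden, num_layers=1,
                          batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_features)
        # tau_theta2: an independent MLP for each observation d
        self.trend = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(n_features)]
        )

    def forward(self, x):                    # x: (batch, time, features)
        h, _ = self.gru(x)                   # (batch, time, 2 * hidden)
        mask = torch.sigmoid(self.proj(h))   # per-step gate in [0, 1]
        trend = torch.cat(
            [mlp(x[..., d:d + 1]) for d, mlp in enumerate(self.trend)], dim=-1
        )                                    # (batch, time, features)
        return mask, trend
```

How the gate and the per-feature trends combine into the final perturbed input is deliberately left out here; that combination follows the paper's own equations rather than anything this sketch should assert.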