TimeX++: Learning Time-Series Explanations with Information Bottleneck
Authors: Zichuan Liu, Tianchun Wang, Jimeng Shi, Xu Zheng, Zhuomin Chen, Lei Song, Wenqian Dong, Jayantha Obeysekera, Farhad Shirani, Dongsheng Luo
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate TIMEX++ on both synthetic and real-world datasets, comparing its performance against leading baselines, and validate its practical efficacy through case studies in a real-world environmental application. |
| Researcher Affiliation | Collaboration | ¹Nanjing University, ²Microsoft Research Asia, ³Pennsylvania State University, ⁴Florida International University. |
| Pseudocode | Yes | We summarize the pseudo-code of TIMEX++ in Appendix E. Algorithm 1: The pseudo-code of TIMEX++. (An illustrative mask-learning sketch follows the table.) |
| Open Source Code | Yes | The source code is available at https://github.com/zichuan-liu/TimeXplusplus. |
| Open Datasets | Yes | For each synthetic dataset, we have generated 5,000 training samples, 1,000 test samples, and 100 validation samples. We also select two representative datasets, Wafer and FreezerRegular, in the UCR archive (Dau et al., 2019) to conduct occlusion experiments. |
| Dataset Splits | Yes | For each synthetic dataset, we have generated 5,000 training samples, 1,000 test samples, and 100 validation samples. All reported results for our method, baselines, and ablations are presented as mean ± std from 5-fold cross-validation. (A split sketch follows the table.) |
| Hardware Specification | Yes | For computational resources, our experiments are performed on an NVIDIA 80GB Tesla A100 GPU. |
| Software Dependencies | No | The paper mentions employing a Transformer classifier and using the open-source code of baselines (Dynamask, TIMEX), but does not provide specific version numbers for software dependencies such as Python, PyTorch, or other libraries. |
| Experiment Setup | Yes | We employ a Transformer (Vaswani et al., 2017) classifier as the black-box model f to explain, where the hyperparameters are optimized to ensure model performance. For the above nine datasets, we list hyperparameters for each experiment performed in Table 9. (A minimal classifier sketch follows the table.) |
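
To make the Pseudocode row concrete, the sketch below shows a generic information-bottleneck-style mask explainer for a frozen black-box classifier `f`: a mask network proposes per-timestep keep probabilities, a relaxed Bernoulli sample builds an explanation-embedded input, and the loss trades off label consistency against mask sparsity. This is our illustration of the general technique, not the authors' Algorithm 1; the names (`MaskNet`, `explanation_loss`, `sparsity_weight`) and the mean-baseline perturbation are assumptions.

```python
# Minimal sketch of an IB-style mask explainer; `f` is assumed to be a frozen
# classifier mapping (batch, time, channels) inputs to class logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskNet(nn.Module):
    """Produces per-timestep keep probabilities for an input series."""
    def __init__(self, n_channels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, n_channels),
        )

    def forward(self, x):  # x: (batch, time, channels)
        return torch.sigmoid(self.net(x))  # keep probabilities in (0, 1)

def explanation_loss(f, mask_net, x, sparsity_weight=1.0):
    """Label consistency under masking plus a sparsity penalty (illustrative)."""
    probs = mask_net(x)
    # Binary-concrete relaxation so gradients flow through the sampling step.
    noise = torch.rand_like(probs).clamp(1e-6, 1 - 1e-6)
    mask = torch.sigmoid((probs.logit() + noise.logit()) / 0.5)
    baseline = x.mean(dim=1, keepdim=True)       # assumed reference signal
    x_tilde = mask * x + (1 - mask) * baseline   # explanation-embedded input
    with torch.no_grad():
        target = F.softmax(f(x), dim=-1)         # black-box prediction on x
    consistency = F.kl_div(F.log_softmax(f(x_tilde), dim=-1),
                           target, reduction="batchmean")
    sparsity = mask.mean()                       # compress: keep few timesteps
    return consistency + sparsity_weight * sparsity
```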
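For the Dataset Splits row, a minimal sketch of the reported 5,000/100/1,000 partition and a 5-fold cross-validation loop over the training portion; the use of NumPy and scikit-learn, and the reading of "5-fold" as a KFold over training indices, are our assumptions.

```python
# Sketch of the paper's reported split sizes; X and y are assumed to hold a
# generated synthetic dataset of 6,100 samples.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_train, n_val, n_test = 5000, 100, 1000
idx = rng.permutation(n_train + n_val + n_test)
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

# Results are then reported as mean ± std across the five folds.
for fold, (tr, va) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(train_idx)):
    print(f"fold {fold}: {len(tr)} train / {len(va)} held-out samples")
```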
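For the Experiment Setup row, a minimal PyTorch sketch of a Transformer time-series classifier in the role of the black-box model f; the layer sizes shown are illustrative defaults, not the tuned hyperparameters from the paper's Table 9.

```python
# Minimal Transformer classifier for series of shape (batch, time, channels);
# all dimensions below are assumed defaults, not the paper's settings.
import torch
import torch.nn as nn

class TransformerClassifier(nn.Module):
    def __init__(self, n_channels, n_classes, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))   # mean-pool over time, then classify

model = TransformerClassifier(n_channels=1, n_classes=2)
logits = model(torch.randn(8, 50, 1))     # e.g. a batch of 8 length-50 series
```

Mean-pooling over time is one common readout choice; a CLS token or last-step readout would serve the same purpose in this sketch.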