Multivariate Time-series Imputation with Disentangled Temporal Representations

Authors: Shuai Liu, Xiucheng Li, Gao Cong, Yile Chen, Yue Jiang

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results show that our method outperforms existing approaches on three typical real-world datasets, especially on long time series, reducing mean absolute error by up to 50%. It also scales well to long datasets on which existing deep learning based methods struggle. Disentanglement validation experiments further highlight the robustness and accuracy of our model.
Researcher Affiliation | Academia | Shuai Liu, School of Computer Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, SHUAI004@e.ntu.edu.sg; Xiucheng Li (corresponding author), School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), No. 6, Pingshan 1st Road, Nanshan District, Shenzhen, Guangdong, China 518055, lixiucheng@hit.edu.cn; Gao Cong, Yile Chen & Yue Jiang, School of Computer Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, gaocong@ntu.edu.sg, {yile001,yue013}@e.ntu.edu.sg
Pseudocode | No | The paper does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | We provide an open-source implementation of our proposed model, TIDER, at https://github.com/liuwj2000/TIDER
Open Datasets | Yes | Guangzhou Traffic Data. This dataset (Chen et al., 2018) contains traffic speed of 214 anonymous urban road segments for 5 days with a 10-minute sampling rate in Guangzhou, China. It results in a 214 × 500 multivariate time series matrix.
Dataset Splits | Yes | We randomly remove a subset of entries from X as validation and test datasets separately. Let r be the missing-rate variable; the ratio of training/validation/test is (0.9 - r)/0.1/r. (A sketch of this split protocol appears after the table.)
Hardware Specification | Yes | All experiments are conducted on a Linux workstation with a 32GB Tesla V100 GPU.
Software Dependencies | Yes | We implement TIDER using Python 3.6 and PyTorch 1.9, and optimize the model parameters using Adam (Kingma & Ba, 2014) with a learning rate of 1e-3. (A minimal optimizer-setup sketch follows the table.)
Experiment Setup | Yes | In this section, we evaluate the performance of TIDER by comparing it with existing multivariate time series imputation methods in terms of imputation accuracy and scalability. We also show the explainability of TIDER with several case studies. Hyperparameter sensitivity experiments are included as well to show that TIDER performs steadily under different hyperparameter settings. For more detailed hyperparameter settings, please refer to Table 7. We use grid search to select the optimal hyperparameters (D, K, Dd, and P) on the validation datasets. (A grid-search sketch follows the table.)
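
A minimal sketch of the random train/validation/test split protocol quoted under Dataset Splits above, assuming X is a fully observed matrix and r is the missing rate, with the (0.9 - r)/0.1/r ratio applied entry-wise. This is an illustration of the stated protocol, not the authors' released code.

```python
# Hedged sketch: entry-wise train/validation/test masks at missing rate r.
import numpy as np

def make_split_masks(shape, r, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random(shape)                    # one uniform draw per matrix entry
    test_mask = u < r                        # fraction r of entries held out for testing
    val_mask = (u >= r) & (u < r + 0.1)      # next 0.1 reserved for validation
    train_mask = u >= r + 0.1                # remaining 0.9 - r entries used for training
    return train_mask, val_mask, test_mask

# Example: a 214 x 500 matrix with missing rate r = 0.2 gives a 0.7/0.1/0.2 split.
train_mask, val_mask, test_mask = make_split_masks((214, 500), r=0.2)
```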
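
The Software Dependencies row reports Python 3.6, PyTorch 1.9, and Adam with a learning rate of 1e-3. The snippet below is a generic PyTorch training-loop skeleton with that optimizer configuration; the model and loss are placeholders, not TIDER's architecture or objective.

```python
# Hedged sketch: Adam with lr = 1e-3, as reported, on a placeholder model.
import torch

model = torch.nn.Linear(214, 64)                         # placeholder, not TIDER
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    optimizer.zero_grad()
    batch = torch.randn(32, 214)                         # dummy input batch
    loss = model(batch).pow(2).mean()                    # dummy objective
    loss.backward()
    optimizer.step()
```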
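
The Experiment Setup row states that the optimal hyperparameters (D, K, Dd, and P) are chosen by grid search on the validation datasets. Below is a hedged sketch of such a search; the candidate values and the train_and_validate stub are assumptions for illustration, not the grids reported in Table 7.

```python
# Hedged sketch: exhaustive grid search over (D, K, Dd, P) using validation MAE.
from itertools import product
import random

def train_and_validate(D, K, Dd, P):
    # Placeholder: in practice, train TIDER with these hyperparameters
    # and return its mean absolute error on the validation set.
    return random.random()

grid = {                      # candidate values are illustrative only
    "D":  [32, 64, 128],
    "K":  [2, 4, 8],
    "Dd": [8, 16],
    "P":  [2, 4],
}

best_cfg, best_mae = None, float("inf")
for D, K, Dd, P in product(grid["D"], grid["K"], grid["Dd"], grid["P"]):
    mae = train_and_validate(D, K, Dd, P)
    if mae < best_mae:
        best_cfg, best_mae = (D, K, Dd, P), mae

print("selected hyperparameters:", best_cfg)
```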