Latent Processes Identification From Multi-View Time Series

Authors: Zenan Huang, Haobo Wang, Junbo Zhao, Nenggan Zheng

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results on synthetic and real-world datasets demonstrate the superiority of our method in recovering identifiable latent variables on multi-view time series."
Researcher Affiliation | Academia | Zenan Huang (1,4), Haobo Wang (4), Junbo Zhao (4), Nenggan Zheng (1,2,3,4); 1: Qiushi Academy for Advanced Studies (QAAS), Zhejiang University; 2: The State Key Lab of Brain-Machine Intelligence, Zhejiang University; 3: CCAI by MOE and Zhejiang Provincial Government (ZJU); 4: College of Computer Science and Technology, Zhejiang University
Pseudocode | No | The paper does not contain any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | Yes | "The code is available on https://github.com/lccurious/MuLTI."
Open Datasets | Yes | "We use synthetic and real-world datasets for evaluating the latent process identification task." Multi-view VAR is a synthetic dataset modified from [Yao et al., 2021]. Mass-spring system is a video dataset adopted from [Li et al., 2020], specifically designed for a multi-view setting. Multi-view UCI Daily and Sports Activities [Altun et al., 2010] is a multivariate time series dataset.
Dataset Splits | Yes | To measure the identifiability of latent causal variables, the paper computes the Mean Correlation Coefficient (MCC) on the validation datasets, revealing indeterminacy up to permutation transformations, and R² for indeterminacy up to linear transformations. The accuracy of the estimated causal relations is assessed against the actual data structure via the Structural Hamming Distance (SHD) on the validation datasets.
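The MCC metric mentioned above measures identifiability up to permutation: each estimated latent dimension is matched one-to-one with a ground-truth dimension so as to maximize the total absolute Pearson correlation, and the matched correlations are averaged. A minimal sketch of this standard computation (the function name is ours; the paper does not publish this exact code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mean_correlation_coefficient(z_true, z_est):
    """MCC: mean |Pearson correlation| after the best one-to-one
    matching of estimated to ground-truth latent dimensions.

    z_true, z_est: arrays of shape (num_samples, num_latents).
    """
    d = z_true.shape[1]
    # Cross-correlation block: rows index true dims, columns estimated dims.
    corr = np.corrcoef(z_true.T, z_est.T)[:d, d:]
    abs_corr = np.abs(corr)
    # Hungarian algorithm finds the permutation maximizing total correlation
    # (negated because linear_sum_assignment minimizes cost).
    rows, cols = linear_sum_assignment(-abs_corr)
    return abs_corr[rows, cols].mean()
```

An MCC near 1 indicates the latents were recovered up to permutation (and sign/scale), since Pearson correlation is invariant to positive scaling of each dimension.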
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments; it mentions only general settings such as "deep neural networks."
Software Dependencies | No | The paper mentions using deep neural networks and the Adam optimizer but does not provide version numbers for any software frameworks (e.g., PyTorch, TensorFlow) or libraries used in the implementation.
Experiment Setup | Yes | The ground-truth latent dimension d is set to 10 and the noise distribution to Laplacian(0, 0.05). The batch size is 2400; the Adam optimizer is used with a learning rate of 0.001 and loss weights β1 = 0.01, β2 = 0.01, β3 = 1e-5. To verify the influence of the overlapping ratio of views, dc ∈ {10, 4, 2, 0} is evaluated, with the time lag L of the causal transition module set equal to the ground truth. In another configuration, the time lag is set to L = 2. Elsewhere, the latent dimensions of view 1 and view 2 are d1 = 12 and d2 = 9 respectively, the shared latent dimension is dc = 3, the complete latent dimension is d = 18, and the maximum time lag of the causal transition module is L = 1.
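The reported hyperparameters can be collected into a single configuration, with the β terms acting as weights on a β-weighted sum of regularizers added to the main objective. A minimal sketch, assuming a generic weighted-loss structure (the dict keys, `CONFIG`, and `total_loss` are our own illustrative names, not the authors' code):

```python
# Hyperparameters as reported in the paper's experiment setup.
CONFIG = {
    "latent_dim": 10,            # ground-truth latent dimension d
    "noise": ("laplace", 0.0, 0.05),
    "batch_size": 2400,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "beta1": 0.01,               # regularizer weights (roles assumed)
    "beta2": 0.01,
    "beta3": 1e-5,
}

def total_loss(recon, reg1, reg2, reg3, cfg=CONFIG):
    """Assumed objective shape: reconstruction term plus three
    beta-weighted regularizers, matching the reported beta values."""
    return (recon
            + cfg["beta1"] * reg1
            + cfg["beta2"] * reg2
            + cfg["beta3"] * reg3)
```

With β3 = 1e-5 three orders of magnitude smaller than β1 and β2, the third regularizer contributes only a gentle nudge to the objective relative to the other two.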