On the Identifiability of Switching Dynamical Systems

Authors: Carles Balsells-Rodas, Yixin Wang, Yingzhen Li

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Throughout empirical studies, we demonstrate the practicality of identifiable Switching Dynamical Systems for segmenting high-dimensional time series such as videos, and showcase the use of identifiable Markov Switching Models for regime-dependent causal discovery in climate data. We evaluate the identifiable MSMs and SDSs with three experiments: (1) simulation studies with ground truth available for verification of the identifiability results; (2) regime-dependent causal discovery in climate data with identifiable MSMs; and (3) segmentation of high-dimensional sequences of salsa dancing using MSMs and SDSs.
Researcher Affiliation | Academia | ¹Imperial College London, ²University of Michigan.
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | Code for inference and data generation can be found at https://github.com/charlio23/identifiable-SDS.
Open Datasets | Yes | We explore regime-dependent causal discovery using climate data from Saggioro et al. (2020). The data consists of monthly observations of El Niño Southern Oscillation (ENSO) and All India Rainfall (AIR) from 1871 to 2016. We consider salsa dancing sequences from CMU mocap data and Hip-Hop videos from AIST Dance DB (Tsuchida et al., 2019).
Dataset Splits | Yes | To generate videos, we subsample the sequences by a factor of 8, and augment the data by rendering human meshes with rotated perspectives and offsetting the subsampled trajectories. To do so, we adapt the available code from Mahmood et al. (2019), and generate 10080 train samples and 560 test samples.
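The subsampling and offset-based augmentation quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the trajectory representation and the helper names `subsample` and `offset` are assumptions, and the actual pipeline also renders human meshes with rotated perspectives, which is not shown here.

```python
# Hypothetical sketch of the reported preprocessing: a trajectory is assumed
# to be a sequence of per-frame poses; mesh rendering is omitted.
def subsample(trajectory, factor=8):
    """Keep every `factor`-th frame (the paper subsamples by a factor of 8)."""
    return trajectory[::factor]

def offset(trajectory, shift):
    """Augmentation: translate every frame by a constant offset."""
    return [[x + s for x, s in zip(frame, shift)] for frame in trajectory]

# Toy example: an 800-frame trajectory of 3-D positions.
frames = [[float(t), 0.0, 0.0] for t in range(800)]
short = subsample(frames)                 # 800 frames -> 100 frames
moved = offset(short, [1.0, 0.0, 0.0])    # one augmented copy
```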
Hardware Specification | Yes | All the experiments are implemented in PyTorch (Paszke et al., 2019) and carried out on NVIDIA RTX 2080Ti GPUs, except for the experiments with videos (synthetic and salsa), where we used NVIDIA RTX A6000 GPUs.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number or list multiple software components with specific versions.
Experiment Setup | Yes | For the synthetic experiments, we use batch size 100, and we train for 100 epochs. We use the Adam optimiser (Kingma & Ba, 2015) with an initial learning rate of 5 × 10⁻⁴, and decrease it by a factor of 0.5 every 30 epochs. To avoid state collapse, we perform an initial warm-up phase for 5 epochs, where we train with fixed discrete state parameters π and Q, which we fix to uniform distributions.
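The optimisation schedule described in that row can be sketched in PyTorch. This is a hedged reconstruction, not the authors' training script: the placeholder `model`, the 5-state sizes for `pi` and `Q`, and the freezing mechanism are all assumptions; only the learning rate, decay schedule, epoch counts, and the warm-up with fixed discrete-state parameters come from the quote.

```python
import torch

# Placeholder model standing in for the paper's MSM/SDS (assumption).
model = torch.nn.Linear(10, 10)
# Discrete-state parameters, fixed to uniform distributions (5 states assumed).
pi = torch.nn.Parameter(torch.full((5,), 1 / 5))    # initial-state distribution
Q = torch.nn.Parameter(torch.full((5, 5), 1 / 5))   # transition matrix

params = list(model.parameters()) + [pi, Q]
optimiser = torch.optim.Adam(params, lr=5e-4)
# Decrease the learning rate by a factor of 0.5 every 30 epochs, as reported.
scheduler = torch.optim.lr_scheduler.StepLR(optimiser, step_size=30, gamma=0.5)

for epoch in range(100):
    # Warm-up: keep pi and Q fixed for the first 5 epochs to avoid state collapse.
    freeze = epoch < 5
    pi.requires_grad_(not freeze)
    Q.requires_grad_(not freeze)
    # ... per-batch forward/backward passes with batch size 100 would go here ...
    scheduler.step()
```

With this schedule the learning rate ends at 5e-4 × 0.5³ = 6.25e-5 after 100 epochs (three decay steps at epochs 30, 60, and 90).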