Learning Mixtures of Linear Dynamical Systems

Authors: Yanxi Chen, H. Vincent Poor

ICML 2022

Reproducibility assessment (each entry lists the variable, the result, and the LLM's supporting response):
Research Type: Experimental. "We validate our theoretical studies with numerical experiments, confirming the efficacy of the proposed algorithm. [...] In these experiments, we fix d = 80, K = 4; moreover, let T_subspace = 20, T_clustering = 20 and T_classification = 5, all of which are much smaller than d. We take |M_subspace| = 30d, |M_clustering| = 10d, and vary |M_classification| between [0, 5000d]. Our experiments focus on Case 1 as defined in (8b), and we generate the labels of the sample trajectories uniformly at random."
Researcher Affiliation: Academia. "Yanxi Chen (1), H. Vincent Poor (1). (1) Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA. Correspondence to: Yanxi Chen <yanxic@princeton.edu>."
Pseudocode: Yes. "Algorithm 1: A two-stage algorithm for mixed LDSs [...] Algorithm 2: Subspace estimation [...] Algorithm 3: Clustering [...] Algorithm 4: Least squares and covariance estimation [...] Algorithm 5: Classification." A hedged sketch of this two-stage pipeline is given after this assessment.
Open Source Code: No. The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide links to a code repository.
Open Datasets: Yes. "In our experiments, we work with the Motion Sense dataset (Malekzadeh et al., 2019)."
Dataset Splits: No. The paper describes the trajectory subsets and lengths it uses (M_subspace, M_clustering, M_classification and T_subspace, T_clustering, T_classification), but it does not specify explicit training, validation, or test splits (e.g., percentages or sample counts) that would allow the data partitioning for model evaluation to be reproduced.
Hardware Specification: No. The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory, or cloud computing instances) used to run the experiments.
Software Dependencies: No. The paper describes algorithms and theoretical aspects but does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) that would be needed for replication.
Experiment Setup: Yes. "In these experiments, we fix d = 80, K = 4; moreover, let T_subspace = 20, T_clustering = 20 and T_classification = 5, all of which are much smaller than d. We take |M_subspace| = 30d, |M_clustering| = 10d, and vary |M_classification| between [0, 5000d]." These values are restated as a configuration sketch below.
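
Since the paper's pseudocode (Algorithms 1-5) is its main reproducibility artifact, a minimal runnable sketch of the two-stage idea may help orient a re-implementation: stage 1 clusters a subset of trajectories and fits one system matrix per cluster by least squares; stage 2 classifies fresh trajectories by one-step prediction error. This is our illustration, not the authors' code: it omits the subspace-estimation step (Algorithm 2) that handles the paper's regime of trajectory lengths much smaller than d, so it assumes trajectories long enough for per-trajectory least squares, and all names (simulate, fit_A, one_step_error) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(A, T, sigma=0.1):
    """Roll out x_{t+1} = A x_t + w_t with w_t ~ N(0, sigma^2 I)."""
    d = A.shape[0]
    x = np.zeros((T + 1, d))
    x[0] = rng.standard_normal(d)
    for t in range(T):
        x[t + 1] = A @ x[t] + sigma * rng.standard_normal(d)
    return x

def fit_A(traj):
    """Least-squares estimate of A from one trajectory (cf. Algorithm 4)."""
    X, Y = traj[:-1], traj[1:]
    return np.linalg.lstsq(X, Y, rcond=None)[0].T

def one_step_error(traj, A):
    """Total one-step prediction error of model A on a trajectory."""
    return np.linalg.norm(traj[1:] - traj[:-1] @ A.T)

# Ground truth: K stable systems in a small dimension (d = 10 for speed).
d, K, T, M = 10, 3, 200, 60
A_true = [0.5 * np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(K)]
labels = rng.integers(K, size=M)
trajs = [simulate(A_true[k], T) for k in labels]

# Stage 1 (cf. Algorithms 3-4): k-means on vectorized per-trajectory
# estimates, then one model per cluster (here, the cluster center reshaped).
feats = np.stack([fit_A(tr).ravel() for tr in trajs])
centers = feats[rng.choice(M, K, replace=False)]
for _ in range(20):
    assign = np.argmin(((feats[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([feats[assign == k].mean(0) if np.any(assign == k)
                        else centers[k] for k in range(K)])
models = [c.reshape(d, d) for c in centers]

# Stage 2 (cf. Algorithm 5): classify a fresh trajectory by prediction error.
fresh = simulate(A_true[0], T)
print("assigned to cluster", int(np.argmin([one_step_error(fresh, A) for A in models])))
```

Note that in the paper's actual short-trajectory regime (e.g., T_clustering = 20 with d = 80), the per-trajectory regression above would be underdetermined; Algorithm 2's subspace estimation is precisely what restores tractability there, and any faithful reproduction would need that step.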
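
For convenience, the quoted experimental configuration can also be collected as constants. This is a minimal sketch with variable names of our choosing; only the values come from the paper's excerpt, and the sweep step for |M_classification| is not stated there, so it is flagged as an assumption.

```python
# Experiment configuration quoted above (Case 1, labels uniform at random).
# Variable names are ours; only the values come from the paper's excerpt.
d, K = 80, 4
T_subspace, T_clustering, T_classification = 20, 20, 5  # all much smaller than d
M_subspace = 30 * d       # |M_subspace| = 2400 trajectories
M_clustering = 10 * d     # |M_clustering| = 800 trajectories
# |M_classification| is varied over [0, 5000d]; the step size below is an
# assumption, not from the paper.
M_classification_grid = range(0, 5000 * d + 1, 500 * d)
```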