Functional Subspace Clustering with Application to Time Series
Authors: Mohammad Taha Bahadori, David Kale, Yingying Fan, Yan Liu
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both synthetic data and real clinical time series show that FSC outperforms both standard time series clustering and state-of-the-art subspace clustering. |
| Researcher Affiliation | Academia | Mohammad Taha Bahadori (MOHAMMAB@USC.EDU), David Kale (DKALE@USC.EDU), Yingying Fan (FANYINGY@MARSHALL.USC.EDU), Yan Liu (YANLIU.CS@USC.EDU); University of Southern California, Los Angeles, CA 90089; Laura P. and Leland K. Whittier Virtual PICU, Children's Hospital Los Angeles, Los Angeles, CA 90027 |
| Pseudocode | Yes | Algorithm 1: Functional subspace clustering. |
| Open Source Code | No | The paper does not provide explicit links to open-source code or state that the code for their method is publicly available. |
| Open Datasets | Yes | The Physionet dataset (http://physionet.org/challenge/2012/) is a publicly available collection of multivariate clinical time series, similar to ICU but with additional variables. The time series are also 48 hours long and include in-hospital mortality as a binary label. |
| Dataset Splits | Yes | For evaluation, we create 30 randomly divided training and testing partitions. For each partition, we train the RBF-SVM on the training partition using 5-fold cross validation, then test it on the corresponding test set. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | In all of the datasets, we normalize each time series to have zero mean and unit variance. Then, we apply each algorithm to learn the affinity matrix and then extract lower dimensional representations as described in Algorithm 2 in Appendix C. We then evaluate the utility of these representations by using them as features in a RBF-SVM binary classifier. For evaluation, we create 30 randomly divided training and testing partitions. For each partition, we train the RBF-SVM on the training partition using 5-fold cross validation, then test it on the corresponding test set. |
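The evaluation protocol quoted in the last row can be sketched as follows. This is a minimal, hedged illustration, not the authors' code: the affinity-matrix learning and representation-extraction steps (Algorithm 2 in the paper's Appendix C) are out of scope here, and the test-set fraction is an assumption, since the paper's excerpt does not state how the 30 random partitions are sized.

```python
import numpy as np

def normalize(ts):
    """Normalize a time series to zero mean and unit variance,
    per the paper's stated preprocessing step."""
    ts = np.asarray(ts, dtype=float)
    return (ts - ts.mean()) / ts.std()

def random_partitions(n, n_partitions=30, test_frac=0.3, seed=0):
    """Yield (train_idx, test_idx) index arrays for the 30 random
    train/test partitions. The 30% test fraction is an assumption;
    the paper does not specify the split ratio."""
    rng = np.random.default_rng(seed)
    n_test = int(round(test_frac * n))
    for _ in range(n_partitions):
        perm = rng.permutation(n)
        yield perm[n_test:], perm[:n_test]

def five_fold(train_idx, seed=0):
    """Split a training partition into 5 folds; hyperparameters of the
    RBF-SVM would be selected by cross-validating over these folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(train_idx), 5)
```

Within each of the 30 partitions, one would tune the RBF-SVM on the 5 training folds, refit on the full training partition, and report accuracy on the held-out test indices, averaging over the 30 partitions.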