Model Metric Co-Learning for Time Series Classification
Authors: Huanhuan Chen, Fengzhen Tang, Peter Tino, Anthony G. Cohn, Xin Yao
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and benchmark data sets confirm the effectiveness of the algorithm compared to a variety of alternative methods. |
| Researcher Affiliation | Academia | Huanhuan Chen, School of Computer Science, Univ. of Sci. & Tech. of China, Hefei, Anhui, China (hchen@ustc.edu.cn); Fengzhen Tang and Peter Tino, School of Computer Science, University of Birmingham, Birmingham, B15 2TT, UK ({fxt126,pxt}@cs.bham.ac.uk); Anthony G. Cohn, School of Computing, University of Leeds, Leeds, LS2 9JT, UK (A.G.Cohn@leeds.ac.uk); Xin Yao, School of Computer Science, University of Birmingham, Birmingham, B15 2TT, UK (X.Yao@cs.bham.ac.uk) |
| Pseudocode | No | The paper describes the methodology using mathematical equations and prose but does not provide pseudocode or a clearly labeled algorithm block. |
| Open Source Code | No | The paper mentions external code for comparison methods but does not provide concrete access to the source code for the MMCL methodology described. |
| Open Datasets | Yes | We used 7 data sets from the UCR Time Series Repository [Keogh et al., 2011]. |
| Dataset Splits | Yes | All (hyper) parameters, such as the MMCL trade-off parameter λ, order p in the AR kernel, number of hidden states in the HMM based Fisher kernel, regularization parameter η for ridge regression etc. have been set by 5-fold cross-validation on the training set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions LIBSVM but does not provide specific version numbers for software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | In MMCL, the number of nodes was fixed to N = 50 with 10 jumps (making the jump length 5). All (hyper)parameters, such as the MMCL trade-off parameter λ, order p in the AR kernel, number of hidden states in the HMM-based Fisher kernel, regularization parameter η for ridge regression, etc., have been set by 5-fold cross-validation on the training set. The SVM parameters, kernel width γ in eq. (13) and C, were tuned in the following ranges: γ ∈ {10⁻⁶, 10⁻⁵, …, 10¹}, C ∈ {10⁻³, 10⁻², …, 10³}. We also tested our MMCL method using a k-NN classifier where k ∈ {1, 2, …, 10}. |
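
The tuning protocol reported in the Experiment Setup row can be illustrated with a short sketch. This is not the authors' MMCL implementation: the paper's kernel in eq. (13) is built from the learned model-metric representation, whereas the snippet below applies a plain RBF kernel to a stand-in feature matrix purely to show the 5-fold cross-validation grids over γ, C, and k quoted above. The data (`X_train`, `y_train`) is synthetic filler, and scikit-learn's `SVC` is used as a convenient wrapper around LIBSVM, which the paper mentions.

```python
# Sketch of the hyper-parameter tuning protocol only; the features and labels
# below are hypothetical stand-ins, not the paper's model-metric representation.
import numpy as np
from sklearn.svm import SVC                          # wraps LIBSVM
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 20))     # stand-in feature vectors (e.g., readout weights)
y_train = rng.integers(0, 2, size=60)   # stand-in binary labels

# 5-fold cross-validation on the training set, as stated in the paper.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# SVM grid: gamma in {1e-6, ..., 1e1}, C in {1e-3, ..., 1e3}.
svm_grid = {"gamma": 10.0 ** np.arange(-6, 2),
            "C": 10.0 ** np.arange(-3, 4)}
svm_search = GridSearchCV(SVC(kernel="rbf"), svm_grid, cv=cv)
svm_search.fit(X_train, y_train)

# k-NN alternative: k in {1, 2, ..., 10}.
knn_grid = {"n_neighbors": list(range(1, 11))}
knn_search = GridSearchCV(KNeighborsClassifier(), knn_grid, cv=cv)
knn_search.fit(X_train, y_train)

print("best SVM params:", svm_search.best_params_)
print("best k-NN params:", knn_search.best_params_)
```

In the paper the search is run per data set on the training split only, with the selected parameters then applied once to the held-out test split; the sketch reproduces only the training-set grid search.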