Max-Margin Infinite Hidden Markov Models

Authors: Aonan Zhang, Jun Zhu, Bo Zhang

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results on synthetic and real data sets show that our methods obtain superior performance than other competitors in both single variate classification and sequential prediction tasks."
Researcher Affiliation | Academia | Aonan Zhang (zan12@tsinghua.edu.cn), Jun Zhu (dcszj@tsinghua.edu.cn), Bo Zhang (dcszb@tsinghua.edu.cn), Dept. of Comp. Sci. & Tech., TNList Lab, State Key Lab of Intell. Tech. & Sys., Tsinghua University, China
Pseudocode | No | No explicit pseudocode or algorithm blocks were found.
Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | Yes | Parkinsons data: "We extract 10 principal components using PCA, following (Shahbaba & Neal, 2009)." Protein data: "We follow (Ding & Dubchak, 2001) to split the data set into a training set..." RGBD-HuDaAct: "RGBD-HuDaAct is a home-monitoring human activity recognition data set containing both color and depth video streams (Ni et al., 2011)."
Dataset Splits | Yes | "We adopt 5-fold cross-validation and report the average performance as well as standard deviations." (Parkinsons and RGBD-HuDaAct); "We follow (Ding & Dubchak, 2001) to split the data set into a training set containing 313 instances and a test set consisting of 385 instances." (Protein data)
Hardware Specification | Yes | "All the experiments were conducted on an Intel Core i5 3.10GHz computer with 4.0GB RAM."
Software Dependencies | No | The paper states only the programming language used: "We implemented our models and re-implemented DPMNL and iM2EDM using C++." No specific software libraries or dependency versions are given.
Experiment Setup | Yes | "In this experiment we set the initial number of states K0 = 10, the HDP concentration hyper-parameters α0 = 2, γ0 = 2, and the large-margin classifier hyper-parameters c = 1, ℓ = 1.6. For models based on Gibbs classifiers we set K0 = 20 and run 300 iterations. While for iM2EDM we set the truncation level K = 20 and run 100 iterations for training."
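The evaluation protocol reported above (5-fold cross-validation with mean performance and standard deviation) can be sketched as follows. This is a minimal illustrative sketch, not code from the paper: the helper names `kfold_indices` and `cross_validate`, the shuffling seed, and the `evaluate(train, test)` callback signature are all assumptions.

```python
import random
import statistics

def kfold_indices(n, k=5, seed=0):
    """Split the indices 0..n-1 into k shuffled, near-equal folds.

    Hypothetical helper: the paper does not describe how folds were drawn.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n, k, evaluate):
    """Run k-fold CV and report mean and standard deviation of the scores.

    `evaluate(train, test)` is an assumed callback that trains a model on
    the `train` indices and returns its score on the `test` indices.
    """
    folds = kfold_indices(n, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        scores.append(evaluate(train, test))
    return statistics.mean(scores), statistics.stdev(scores)
```

Each instance appears in exactly one test fold, so the k scores are computed on disjoint held-out sets, and the mean ± std pair matches the reporting format quoted in the "Dataset Splits" row.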