An Infinite Hidden Markov Model With Similarity-Biased Transitions

Authors: Colin Reimer Dawson, Chaofan Huang, Clayton T. Morrison

ICML 2017

Each entry below gives a reproducibility variable, the assessed result, and the supporting LLM response.

Research Type: Experimental
  We evaluate the model and inference method on a speaker diarization task and a harmonic parsing task using four-part chorale data, as well as on several synthetic datasets, achieving favorable comparisons to existing models.

Researcher Affiliation: Academia
  ¹Oberlin College, Oberlin, OH, USA; ²The University of Arizona, Tucson, AZ, USA.

Pseudocode: No
  The paper describes the Gibbs sampling algorithm in prose but does not provide pseudocode or a clearly labeled algorithm block.
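
Since the paper gives no pseudocode, the sketch below illustrates the general shape of a Gibbs sweep for a Bayesian HMM with Gaussian emissions. It is a minimal stand-in, not the authors' HDP-HMM-LT sampler, and all names (z, y, pi, mu, K, alpha0) are illustrative assumptions.

    # Hedged sketch: one single-site Gibbs sweep for a finite Bayesian HMM
    # with Gaussian emissions. This is NOT the paper's HDP-HMM-LT sampler;
    # it only illustrates the kind of update the paper describes in prose.
    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_sweep(z, y, pi, mu, sigma=1.0):
        """Resample each hidden state z[t] given its neighbors and y[t]."""
        T, K = len(y), pi.shape[0]
        for t in range(T):
            # Prior from the previous state (uniform at t = 0).
            log_p = np.log(pi[z[t - 1]]) if t > 0 else np.full(K, -np.log(K))
            # Likelihood of transitioning into the next state.
            if t < T - 1:
                log_p = log_p + np.log(pi[:, z[t + 1]])
            # Gaussian emission likelihood for each candidate state.
            log_p = log_p - 0.5 * ((y[t] - mu) / sigma) ** 2
            p = np.exp(log_p - log_p.max())
            z[t] = rng.choice(K, p=p / p.sum())
        return z

    def resample_transitions(z, K, alpha0=1.0):
        """Conjugate Dirichlet update of each transition row from counts."""
        counts = np.zeros((K, K))
        for a, b in zip(z[:-1], z[1:]):
            counts[a, b] += 1
        return np.vstack([rng.dirichlet(alpha0 / K + counts[k]) for k in range(K)])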

Open Source Code: Yes
  Code and additional details are available at http://colindawson.net/hdp-hmm-lt/

Open Datasets: Yes
  The data was constructed using audio signals collected from the PASCAL 1st Speech Separation Challenge (http://laslab.org/SpeechSeparationChallenge/). The underlying signal consisted of D = 16 speaker channels recorded at each of T = 2000 time steps... The data was a corpus of 217 four-voice major key chorales by J.S. Bach from music21 (http://web.mit.edu/music21).
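
As a hedged illustration of accessing the chorale data named above, the snippet below uses music21's bundled corpus; it shows standard library usage only, not the authors' preprocessing into harmonic observations.

    # Hedged sketch: access Bach chorales via music21's bundled corpus.
    # Only data access is shown; the paper's preprocessing pipeline is
    # not reproduced here.
    from music21 import corpus

    # Search the built-in corpus metadata for works by Bach.
    bach_entries = corpus.search('bach', field='composer')
    print(len(bach_entries), 'Bach entries in the local corpus')

    # Parse one well-known chorale into a Score object.
    score = corpus.parse('bach/bwv66.6')
    print(score.metadata.title)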

Dataset Splits: Yes
  The data was a corpus of 217 four-voice major key chorales by J.S. Bach from music21, 200 of which were randomly selected as a training set, with the other 17 used as a test set to evaluate surprisal (marginal log likelihood per observation) by the trained models.
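
A minimal sketch of the 200/17 random split quoted above; the seed and the placeholder chorale IDs are assumptions, since the paper does not report how the split was drawn.

    # Hedged sketch: split 217 chorales into 200 training / 17 test items,
    # mirroring the split described in the paper. The seed is an assumption.
    import random

    chorale_ids = list(range(217))  # placeholder IDs for the 217 chorales
    rng = random.Random(0)
    rng.shuffle(chorale_ids)
    train_ids, test_ids = chorale_ids[:200], chorale_ids[200:]
    assert len(train_ids) == 200 and len(test_ids) == 17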

Hardware Specification: No
  No specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running the experiments were mentioned in the paper.

Software Dependencies: No
  No specific software dependencies with version numbers were mentioned. The paper only mentions "music21" and refers to "Python" without providing version information for these or other libraries.

Experiment Setup: Yes
  For all models, all concentration and noise precision parameters are given Gamma(0.1, 0.1) priors. For the Sticky models, the ratio κ/(α+κ) is given a Unif(0, 1) prior. We ran 5 Gibbs chains for 10,000 iterations each using the HDP-HMM-LT, Sticky-HDP-HMM-LT, HDP-HMM and Sticky-HDP-HMM models on the 200 training chorales...
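
A hedged sketch of drawing initial hyperparameters from the priors quoted above. Reading Gamma(0.1, 0.1) as shape/rate is an assumption; NumPy parameterizes the Gamma by scale, so rate 0.1 becomes scale 10.

    # Hedged sketch: sample hyperparameters from the quoted priors.
    # Gamma(0.1, 0.1) is read as shape=0.1, rate=0.1 (scale = 1/rate = 10).
    import numpy as np

    rng = np.random.default_rng(1)

    alpha = rng.gamma(shape=0.1, scale=10.0)      # concentration parameter
    precision = rng.gamma(shape=0.1, scale=10.0)  # noise precision

    # Sticky models: Unif(0, 1) prior on rho = kappa / (alpha + kappa);
    # invert to recover the self-transition bonus kappa.
    rho = rng.uniform(0.0, 1.0)
    kappa = rho * alpha / (1.0 - rho)

    n_chains, n_iters = 5, 10_000  # 5 Gibbs chains, 10,000 iterations each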