Scalable Adaptation of State Complexity for Nonparametric Hidden Markov Models
Authors: Michael C. Hughes, William Stephenson, Erik B. Sudderth
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the reliability and scalability of our approach... |
| Researcher Affiliation | Academia | Michael C. Hughes, William Stephenson, and Erik B. Sudderth; Department of Computer Science, Brown University, Providence, RI 02912; mhughes@cs.brown.edu, wtstephe@gmail.com, sudderth@cs.brown.edu |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. The paper describes algorithmic steps in narrative text. |
| Open Source Code | Yes | We have released an open-source Python implementation which can parallelize local inference steps across sequences. (A hedged, illustrative sketch of this per-sequence parallelization pattern appears after the table.) |
| Open Datasets | Yes | We study 21 unrelated audio recordings of meetings with an unknown number of speakers from the NIST 2007 speaker diarization challenge [23]. |
| Dataset Splits | No | The paper does not provide specific details about training, validation, and test dataset splits (e.g., percentages, sample counts, or citations to predefined splits). It refers to training and test data but does not state explicitly how the data were partitioned. |
| Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running experiments were found. The paper mentions parallelization but without hardware specifications. |
| Software Dependencies | No | The paper states that it uses "Python code" but does not provide version numbers for Python or for any supporting libraries or packages that would be needed to replicate the experiments. |
| Experiment Setup | No | The paper mentions some general settings, such as starting with 1 state or 25 states and values for κ, but it does not provide comprehensive experimental setup details such as learning rates, batch sizes, optimizer configurations, or detailed training schedules (e.g., number of epochs). |
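
The Open Source Code row above quotes the paper's statement that its Python implementation can parallelize local inference steps across sequences. As a minimal illustration of that general pattern only (not the authors' released code), the sketch below runs a log-space forward-backward pass independently on each sequence with `multiprocessing.Pool` and then sums the per-sequence expected state counts; all function and variable names (`local_step`, `logsumexp_rows`, `log_init`, `log_trans`) are hypothetical, and the observation log-likelihoods are random stand-ins for a real observation model.

```python
# Illustrative sketch only: per-sequence local inference (forward-backward)
# parallelized across sequences, then aggregated. Names are hypothetical and
# are not taken from the paper's released implementation.
import numpy as np
from multiprocessing import Pool


def logsumexp_rows(M):
    """Numerically stable log-sum-exp over the first axis of a (K, K) matrix."""
    m = M.max(axis=0)
    return m + np.log(np.exp(M - m).sum(axis=0))


def local_step(args):
    """Forward-backward for one sequence; returns its expected state counts."""
    log_lik, log_init, log_trans = args              # shapes: (T, K), (K,), (K, K)
    T, K = log_lik.shape
    # Forward pass in log space.
    alpha = np.zeros((T, K))
    alpha[0] = log_init + log_lik[0]
    for t in range(1, T):
        alpha[t] = log_lik[t] + logsumexp_rows(alpha[t - 1][:, None] + log_trans)
    # Backward pass in log space.
    beta = np.zeros((T, K))
    for t in range(T - 2, -1, -1):
        beta[t] = logsumexp_rows((log_trans + log_lik[t + 1] + beta[t + 1]).T)
    # Posterior state marginals, normalized per time step.
    log_post = alpha + beta
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)
    return post.sum(axis=0)                          # expected usage of each state


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = 5
    log_init = np.log(np.full(K, 1.0 / K))
    log_trans = np.log(rng.dirichlet(np.ones(K), size=K))
    # Random log-likelihoods standing in for a real observation model.
    sequences = [rng.normal(size=(int(rng.integers(50, 100)), K)) for _ in range(8)]
    with Pool() as pool:
        stats = pool.map(local_step, [(s, log_init, log_trans) for s in sequences])
    print("aggregated expected state counts:", np.sum(stats, axis=0))
```

This split is what makes across-sequence parallelism natural for HMM-style models: each sequence's forward-backward pass depends only on shared global parameters, so per-sequence sufficient statistics can be computed independently and summed before any global update.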