FlowHMM: Flow-based continuous hidden Markov models
Authors: Paweł Lorek, Rafał Nowak, Tomasz Trzciński, Maciej Zięba
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform a variety of experiments in which we compare both training strategies with a baseline of Gaussian mixture models. We show that, in terms of the quality of the recovered probability distribution, the accuracy of prediction of hidden states, and the likelihood of unseen data, our approach outperforms the standard Gaussian methods. (Section 5, Experimental results) |
| Researcher Affiliation | Collaboration | Paweł Lorek (Mathematical Institute, University of Wrocław; Tooploox), pawel.lorek@math.uni.wroc.pl; Rafał Nowak (Institute of Computer Science, University of Wrocław; Tooploox), rafal.nowak@cs.uni.wroc.pl; Tomasz Trzciński (Warsaw University of Technology; Jagiellonian University of Cracow; Tooploox; IDEAS NCBR), tomasz.trzcinski@pw.edu.pl; Maciej Zięba (Department of Artificial Intelligence, Wrocław University of Science and Technology; Tooploox), maciej.zieba@pwr.edu.pl |
| Pseudocode | Yes | Algorithm 1 Training using FQ technique (a hedged sketch of this loop appears after the table) |
| Open Source Code | Yes | We used the hmmlearn library for the computations of all G(K) models and provide the source code for FlowHMM models: https://github.com/tooploox/flowhmm (see the baseline sketch below the table) |
| Open Datasets | Yes | We retrieved data [30] for the period 9/1977-8/2017, consisting of 2082 points. [30] data.world. Dow Jones and S&P500 datasets, accessed 2022. https://data.world/chasewillden/stock-market-from-a-high-level/workspace/. We used mean air pressure from the [31] dataset, collected between 02/15/18 and 02/28/19. [31] AmeriGEOSS. Wind measurements in Papua New Guinea, 2019. Data retrieved from https://data.amerigeoss.org/dataset/857a9652-1a8f-4098-9f51-433a81583387/resource/a8db6cec-8faf-4c8e-aee2-4fb45d1e6f14. |
| Dataset Splits | No | The paper specifies training and testing splits (e.g., "train and test observations of the same length T", "trained models on T = 1000 observations (and computed norm LLs for the remaining data)"), but does not explicitly mention a separate validation set or specific validation split percentages/counts. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'hmmlearn' library but does not specify its version number or any other software dependencies with their respective versions. |
| Experiment Setup | Yes | We train FML and FQ models using 500 (Example 1) and 1000 (Examples 2a and 2b) epochs. For FQ we used a grid of size M = 30. During training of the FML model we randomly sampled subsequences of length 10^3 for each epoch (the sampling is sketched below the table). Algorithm 1 also lists the hyperparameters: number of hidden states L, step size α, and noise variance σ²_noise. |
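
The FQ technique named in Algorithm 1 quantizes the continuous observations onto a grid of M nodes and fits the model by matching the empirical co-occurrence matrix of consecutive quantized observations against the one implied by the model. Below is a minimal sketch of such a loop in PyTorch, assuming 1-D observations and substituting a per-state Gaussian density for the paper's normalizing-flow emission; the hyperparameter names (`L`, `M`, `alpha`, `sigma_noise`) mirror the table above, and the code is an illustration under those assumptions, not the authors' implementation.

```python
# Illustrative FQ-style training loop (not the authors' code). A per-state
# Gaussian stands in for the paper's normalizing-flow emission density.
import math
import torch

def fq_train(x, L=3, M=30, alpha=1e-2, sigma_noise=0.01, epochs=500):
    x = torch.as_tensor(x, dtype=torch.float32).flatten()
    grid = torch.linspace(x.min().item(), x.max().item(), M)  # M grid nodes

    # Empirical co-occurrence of consecutive quantized observations.
    idx = torch.argmin((x[:, None] - grid[None, :]).abs(), dim=1)
    Q_hat = torch.zeros(M, M)
    for a, b in zip(idx[:-1].tolist(), idx[1:].tolist()):
        Q_hat[a, b] += 1.0
    Q_hat /= Q_hat.sum()

    # Unconstrained parameters: transition logits and Gaussian emissions.
    A_logits = torch.zeros(L, L, requires_grad=True)
    mu = (x.mean() + x.std() * torch.randn(L)).requires_grad_()
    log_s = torch.zeros(L, requires_grad=True)
    opt = torch.optim.Adam([A_logits, mu, log_s], lr=alpha)

    for _ in range(epochs):
        A = torch.softmax(A_logits, dim=1)        # row-stochastic transitions
        pi = torch.full((L,), 1.0 / L)
        for _ in range(50):                       # power iteration for the
            pi = pi @ A                           # stationary distribution
        g = grid + sigma_noise * torch.randn(M)   # noise-perturbed grid nodes
        s = log_s.exp()[:, None]
        dens = torch.exp(-0.5 * ((g[None, :] - mu[:, None]) / s) ** 2) \
               / (s * math.sqrt(2 * math.pi))     # (L, M) emission densities
        B = dens / dens.sum(dim=1, keepdim=True)  # normalize over the grid
        Q = B.T @ (pi[:, None] * A) @ B           # model co-occurrence (M, M)
        loss = ((Q - Q_hat) ** 2).sum()           # Frobenius-style distance
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.softmax(A_logits, dim=1).detach(), mu.detach(), log_s.detach()
```

In the paper the emission densities come from a normalizing flow evaluated at the grid nodes; replacing the Gaussian `dens` with flow densities is the only structural change this sketch would need.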
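For the G(K) baselines the paper relies on hmmlearn (version unspecified). The sketch below shows what such a baseline, together with the "train on T = 1000 observations, score the remainder" split from the table, could look like; the synthetic series, K = 3, and the per-observation reading of "norm LL" are placeholder assumptions.

```python
# Hedged sketch of a Gaussian-HMM baseline G(K) with the hmmlearn API;
# the random series and K = 3 are placeholders, not paper settings.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
X = rng.normal(size=(2082, 1))      # stand-in for, e.g., the Dow Jones series

T = 1000                            # train on the first T observations and
X_train, X_test = X[:T], X[T:]      # score the remainder, as in the paper

model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=100)
model.fit(X_train)

norm_ll = model.score(X_test) / len(X_test)  # per-observation log-likelihood
states = model.predict(X_test)               # Viterbi decoding of hidden states
print(norm_ll, states[:10])
```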
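The per-epoch subsequence sampling described for FML training is simple to reproduce; here is a minimal sketch, assuming a 1-D array and using the hypothetical helper name `sample_subsequence`.

```python
# Sketch of drawing one random training window of length 10**3 per epoch,
# as described for FML; `sample_subsequence` is a hypothetical helper name.
import numpy as np

def sample_subsequence(x, length=10**3, rng=None):
    rng = rng or np.random.default_rng()
    start = int(rng.integers(0, len(x) - length + 1))
    return x[start:start + length]
```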