Hidden 1-Counter Markov Models and How to Learn Them

Authors: Mehmet Kurucan, Mete Özbaltan, Sven Schewe, Dominik Wojtczak

IJCAI 2022

Reproducibility assessment. Each entry gives the variable, the result, and the supporting LLM response:
Research Type (Experimental): 'All algorithms here were implemented in Python and evaluated on Intel i7 3.3 GHz CPU with 16 GB RAM. We also implemented the standard algorithms (forward, backward, Baum-Welch) for HMMs ourselves for a fair comparison. ... We tested the algorithm on the following input. We first created 6 different models... Then using each H_i, we created a random multi-set of observation sequences O_i(H_i, T, 20000)...' A sketch of this sequence-generation step appears after the table.
Researcher Affiliation (Academia): Ardahan University, Ardahan, Turkey; Erzurum Technical University, Erzurum, Turkey; University of Liverpool, Liverpool, UK.
Pseudocode (No): While the paper provides detailed step-by-step mathematical descriptions of the adapted algorithms (Forward, Backward, Baum-Welch) with base and recursion steps, these are presented as formal definitions and equations rather than as explicitly labelled 'Pseudocode' or 'Algorithm' blocks. A minimal forward-recursion sketch for the standard HMM case appears after the table.
Open Source Code (No): 'The full source-code and the inputs used will be made freely available online.' This statement indicates future availability rather than concrete access at the time of publication.
Open Datasets (No): The paper describes generating its own observation sequences: 'Then using each H_i, we created a random multi-set of observation sequences O_i(H_i, T, 20000) that contain 20000 observation sequences with a fixed length T = 16.' No information is provided about public availability of this generated data, standard public datasets, or access links.
Dataset Splits (No): The paper mentions 'a set of test observations, which were different from the observation sequences used in the learning process', implying a train/test split, but no specific percentages, counts, or explicit mention of a validation split is provided.
Hardware Specification (Yes): 'All algorithms here were implemented in Python and evaluated on Intel i7 3.3 GHz CPU with 16 GB RAM.'
Software Dependencies (No): The paper states 'All algorithms here were implemented in Python' but does not specify the version of Python or any other software dependencies with version numbers.
Experiment Setup (Yes): 'We started with 100 different initial completely random (i.e., fully connected with probabilities picked uniformly at random) H1MM models (or HMM models). After each learning step, we discarded the bottom 25% of these models as measured by the value of their likelihood. Eventually we only had one model left that we trained until the learning process converged.' A sketch of this selection-and-training loop appears after the table.
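
The dataset-generation step quoted in the Research Type and Open Datasets entries (a multi-set O_i(H_i, T, 20000) of 20000 observation sequences of fixed length T = 16, sampled from each model H_i) is straightforward to reproduce for a plain HMM. The following is a minimal sketch, not the authors' code: the function name, the (pi, A, B) parameter layout, and the seed are assumptions, and an H1MM version would additionally thread the counter value through the sampled state.

    import numpy as np

    def sample_sequences(pi, A, B, T=16, n_sequences=20000, seed=0):
        # Sample a multi-set of observation sequences from an HMM.
        # pi: (n_states,) initial state distribution
        # A:  (n_states, n_states) transitions, A[s, s2] = P(next = s2 | s)
        # B:  (n_states, n_obs) emissions, B[s, o] = P(o | s)
        rng = np.random.default_rng(seed)
        n_states, n_obs = B.shape
        sequences = np.empty((n_sequences, T), dtype=np.int64)
        for i in range(n_sequences):
            s = rng.choice(n_states, p=pi)
            for t in range(T):
                sequences[i, t] = rng.choice(n_obs, p=B[s])  # emit from current state
                s = rng.choice(n_states, p=A[s])             # then transition
        return sequences  # duplicates are kept: the result is a multi-set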
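
For reference, the 'base and recursion steps' mentioned in the Pseudocode entry follow the textbook forward algorithm. The sketch below is the standard HMM formulation only; the paper's H1MM adaptation additionally indexes the forward variable by the counter value, which is not reproduced here, and the names and layout are assumptions.

    import numpy as np

    def forward_likelihood(pi, A, B, obs):
        # alpha[t, s] = P(o_1..o_t, state at time t = s)
        T, n_states = len(obs), len(pi)
        alpha = np.zeros((T, n_states))
        alpha[0] = pi * B[:, obs[0]]                  # base step
        for t in range(1, T):                         # recursion step
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        return alpha[-1].sum()                        # P(obs) under the model

In practice one rescales alpha at each step to avoid numerical underflow on long sequences; at T = 16 this is rarely an issue.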
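
The Experiment Setup entry describes a tournament over 100 random initial models, discarding the bottom 25% by likelihood after each learning step. A minimal sketch of that loop follows; 'learn_step' and 'log_likelihood' are hypothetical callables standing in for one Baum-Welch update and for likelihood evaluation, and the convergence tolerance is an assumption.

    def select_and_train(models, data, learn_step, log_likelihood,
                         drop_frac=0.25, tol=1e-6):
        # models: initial pool (e.g. 100 fully random H1MMs or HMMs)
        # learn_step, log_likelihood: hypothetical callables, not from the paper
        models = list(models)
        while len(models) > 1:
            models = [learn_step(m, data) for m in models]   # one learning step each
            models.sort(key=lambda m: log_likelihood(m, data), reverse=True)
            keep = max(1, min(int(len(models) * (1 - drop_frac)),
                              len(models) - 1))              # discard bottom 25%
            models = models[:keep]
        best, prev = models[0], float('-inf')
        while True:                                          # train survivor to convergence
            best = learn_step(best, data)
            ll = log_likelihood(best, data)
            if ll - prev < tol:
                return best
            prev = ll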