Unsupervised Hierarchical Temporal Abstraction by Simultaneously Learning Expectations and Representations

Authors: Katherine Metcalf, David Leake

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluations show that the temporal abstraction hierarchies generated by ENHAnCE closely match hand-coded hierarchies for the test data streams. ENHAnCE was evaluated on text- and speech-based data streams, using existing data streams for which concept hierarchies were already available as ground truth at each level of temporal abstraction.
Researcher Affiliation | Academia | Katherine Metcalf and David Leake, Computer Science Department, Indiana University, {metcalka, leake}@indiana.edu
Pseudocode | No | The paper describes the algorithm's steps in paragraph form and with a diagram (Figure 1), but it does not include a dedicated pseudocode block or an 'Algorithm' section.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The second was the sequence of 90 words (8 unique words) used in Saffran's infant speech comprehension experiment [Saffran et al., 1996].
Dataset Splits | No | The paper describes iterative training and convergence criteria for its model components, but it does not explicitly define training, validation, and test dataset splits (e.g., percentages or sample counts).
Hardware Specification | No | The paper states that the model was 'implemented using Tensorflow (1.2.0)' but does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | Yes | The model was implemented using Tensorflow (1.2.0).
Experiment Setup | Yes | Backpropagation was handled using the Adam optimizer. The RL policy-based gating mechanism was learned using Expected Sarsa... For the SRNN, the GRNN, and the gating mechanism's policy, the convergence condition was ||Loss_t - Loss_{t-1}||_2 <= 1e-12 (see the sketch after this table).
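
The convergence condition quoted above can be made concrete with a short sketch. The Python snippet below is illustrative only and not taken from the paper; the function name, threshold constant, and loss values are hypothetical. It checks whether the L2 norm of the change in loss between consecutive training iterations has fallen to 1e-12 or below, the stopping rule reported for the SRNN, the GRNN, and the gating policy.

    import numpy as np

    CONVERGENCE_THRESHOLD = 1e-12  # threshold quoted in the paper's setup

    def has_converged(loss_t, loss_t_prev, threshold=CONVERGENCE_THRESHOLD):
        """Return True when ||Loss_t - Loss_{t-1}||_2 <= threshold.

        Accepts scalar losses or per-component loss vectors; atleast_1d
        lets the same L2-norm call cover both cases.
        """
        delta = np.atleast_1d(np.asarray(loss_t, dtype=float)
                              - np.asarray(loss_t_prev, dtype=float))
        return bool(np.linalg.norm(delta, ord=2) <= threshold)

    # Hypothetical usage in a training loop (loss values are made up):
    losses = [0.52, 0.431, 0.431]
    for prev_loss, curr_loss in zip(losses, losses[1:]):
        if has_converged(curr_loss, prev_loss):
            print("converged")
            break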