Encoding and Recall of Spatio-Temporal Episodic Memory in Real Time
Authors: Poo-Hee Chang, Ah-Hwee Tan
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical results based on a public domain data set show that STEM displays a high level of efficiency and robustness in encoding and retrieval with both partial and noisy search cues when compared with a state-of-the-art associative memory model. Compared with the GAM model, our experiments show that the STEM model is able to encode the over 40,000 extracted events in seconds and supports recall of the stored events using partial and noisy search cues. |
| Researcher Affiliation | Academia | Poo-Hee Chang and Ah-Hwee Tan, School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, {phchang, asahtan}@ntu.edu.sg |
| Pseudocode | Yes | Algorithm 1 Spatial representation encoding; Algorithm 2 Event encoding; Algorithm 3 Event retrieval |
| Open Source Code | No | The paper does not provide a link to open-source code for the methodology described, nor does it explicitly state that the code is available. |
| Open Datasets | Yes | We use the CAVIAR data set [Fisher, 2004; Fisher et al., 2005]. It contains 28 videos of a lobby entrance, together with hand-labeled ground truths of the surveillance activities. |
| Dataset Splits | No | The paper states that "1,000 randomly selected events from the data set are used to form the cues and be tested" in the retrieval experiments, but it does not specify a training, validation, and test split for the main model or dataset; the split described applies only to the retrieval-cue testing. |
| Hardware Specification | No | The paper mentions typical computer usage and computational times (e.g., 367.45 microseconds), but it does not provide specific hardware details like GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions using Fusion ART and comparing against GAM, but it does not list specific software dependencies with version numbers for its implementation (e.g., Python, PyTorch, specific libraries with versions). |
| Experiment Setup | Yes | The spatial representation is learned with vigilance values of ρ = 0.99, the contribution parameter γ = 0.5 and choice parameter α = 0.001 on both the coordinate and landmark fields. |
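
For reference, below is a minimal sketch of how the reported parameters (vigilance ρ = 0.99, contribution γ = 0.5, and choice α = 0.001 on the coordinate and landmark fields) would enter the standard fusion ART choice and match functions that STEM builds on. The field names, array shapes, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hedged sketch (not the authors' code): the reported parameters plugged into
# the standard fusion ART choice and match functions. Field names and vector
# shapes are illustrative assumptions.

RHO   = {"coordinate": 0.99,  "landmark": 0.99}   # vigilance per field
GAMMA = {"coordinate": 0.5,   "landmark": 0.5}    # contribution per field
ALPHA = {"coordinate": 0.001, "landmark": 0.001}  # choice parameter per field

def choice(inputs, weights):
    """Fusion ART choice value T_j for one category node.

    inputs, weights: dicts mapping field name -> 1-D numpy array
    (complement-coded activity / weight vectors of equal length).
    """
    t = 0.0
    for k, x in inputs.items():
        w = weights[k]
        overlap = np.minimum(x, w).sum()           # |x^k ∧ w_j^k| (fuzzy AND, L1 norm)
        t += GAMMA[k] * overlap / (ALPHA[k] + w.sum())
    return t

def resonates(inputs, weights):
    """Template matching: every field must satisfy its vigilance criterion."""
    for k, x in inputs.items():
        w = weights[k]
        match = np.minimum(x, w).sum() / x.sum()   # |x^k ∧ w_j^k| / |x^k|
        if match < RHO[k]:
            return False
    return True

if __name__ == "__main__":
    # Toy complement-coded vectors for one cue and one stored category.
    cue = {"coordinate": np.array([0.2, 0.8, 0.7, 0.3]),
           "landmark":   np.array([1.0, 0.0, 0.0, 1.0])}
    stored = {"coordinate": np.array([0.2, 0.8, 0.7, 0.3]),
              "landmark":   np.array([1.0, 0.0, 0.0, 1.0])}
    print(choice(cue, stored), resonates(cue, stored))
```

In fusion ART, the node with the highest choice value that also passes the vigilance test in every field is selected; a vigilance of 0.99 admits only near-exact matches, so each learned spatial category remains highly specific.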