Region-Based Message Exploration over Spatio-Temporal Data Streams

Authors: Lisi Chen, Shuo Shang

AAAI 2019, pp. 873-880 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the efficacy and efficiency of our proposal on two real-world datasets, and the results demonstrate that our solution achieves high efficiency and effectiveness compared with the baselines.
Researcher Affiliation | Collaboration | Lisi Chen (1), Shuo Shang (2); (1, 2) Inception Institute of Artificial Intelligence, UAE; (2) UESTC, China; chenlisi.cs@gmail.com, jedi.shang@gmail.com
Pseudocode | Yes | Algorithm 1: Greedy Region Summary (C, k, ζ, δ)
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | No | The paper mentions using two real-life datasets, FS (from Foursquare) and TT (geo-tagged tweets), and states 'We train our model on 5000 generated instances.' However, it does not provide concrete access information (e.g., links, DOIs, or specific citations with author/year) for these datasets that would allow public access or reproduction.
Dataset Splits | No | The paper mentions training on '5000 generated instances' for the triplet network and evaluating on the FS and TT datasets. However, it does not provide explicit training/validation/test split percentages or sample counts for the main FS and TT datasets, which are crucial for reproducing the overall experiment.
Hardware Specification | No | The paper states 'All of the algorithms are run in memory. We report the cpu time for efficiency evaluation,' but it does not provide any specific details about the hardware used, such as CPU or GPU models, memory size, or cloud computing specifications.
Software Dependencies | No | The paper mentions using an Online Latent Dirichlet Allocation (OLDA) model and building a CNN with ReLU non-linearity. However, it does not provide specific software names with version numbers for any libraries, frameworks, or other dependencies necessary to replicate the experiment.
Experiment Setup | Yes | For evaluating TN, we build a CNN that consists of 3 convolutional layers. The stride is set as 2×2. We use a ReLU non-linearity between two adjacent layers. The weight decay parameter λ is set to 5×10⁻⁵. The gap parameter g is 0.25. We train our model on 5000 generated instances.
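The Experiment Setup row reports only two loss-related hyperparameters: the gap parameter g = 0.25 and the weight decay λ = 5×10⁻⁵. A minimal NumPy sketch of a margin-based triplet objective consistent with those values is shown below; the squared-Euclidean distance, the hinge form of the loss, and the embedding shapes are assumptions, since the paper (as summarized here) does not state them.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, g=0.25):
    """Per-example hinge loss: max(0, d(a, p) - d(a, n) + g).

    anchor/positive/negative: (batch, dim) embedding arrays.
    The gap g = 0.25 matches the value reported in the setup;
    the squared-Euclidean distance is an assumption.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(0.0, d_pos - d_neg + g)

def l2_penalty(weights, lam=5e-5):
    """Weight-decay term with lambda = 5e-5 as reported in the setup."""
    return lam * sum(np.sum(w ** 2) for w in weights)
```

With identical anchor and positive embeddings and a distant negative, the hinge is inactive and the loss is zero; when all three embeddings coincide, the loss reduces to the margin g itself.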