Time-Aware Knowledge Representations of Dynamic Objects with Multidimensional Persistence
Authors: Baris Coskunuzer, Ignacio Segovia-Dominguez, Yuzhou Chen, Yulia R. Gel
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We derive theoretical guarantees of TMP vectorizations and show their utility in application to forecasting on benchmark traffic flow, Ethereum blockchain, and electrocardiogram datasets, demonstrating competitive performance, especially in scenarios with limited data records. In addition, our TMP method improves the computational efficiency of state-of-the-art multipersistence summaries by up to 59.5 times. Datasets: We consider three types of data: two widely used benchmark datasets on California (CA) traffic (Chen et al. 2001) and electrocardiography (ECG5000) (Chen et al. 2015a), and the newly emerged data on Ethereum blockchain tokens (Shamsi et al. 2022). Experimental Results: We compare our TMP-Nets with 6 state-of-the-art baselines, using three standard performance metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE) (see the metric sketch after this table). |
| Researcher Affiliation | Academia | 1 University of Texas at Dallas, Department of Mathematical Sciences; 2 West Virginia University, School of Mathematical & Data Sciences; 3 Temple University, Department of Computer and Information Sciences; 4 National Science Foundation. Emails: coskunuz@utdallas.edu, Ignacio.Segovia Dominguez@mail.wvu.edu, yuzhou.chen@temple.edu, ygl@utdallas.edu |
| Pseudocode | No | The paper describes the proposed methods using mathematical notation and textual explanations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at the link: https://www.dropbox.com/sh/h28f1cf98t9xmzj/AACBavvHcctCB1FVQNyf-XRa?dl=0 |
| Open Datasets | Yes | Datasets: We consider three types of data: two widely used benchmark datasets on California (CA) traffic (Chen et al. 2001) and electrocardiography (ECG5000) (Chen et al. 2015a), and the newly emerged data on Ethereum blockchain tokens (Shamsi et al. 2022). |
| Dataset Splits | No | The paper mentions using specific time lengths for traffic data ('T = 1,000 and T = 2,000') and discusses metrics like MAE, RMSE, and MAPE, but it does not explicitly provide percentages or counts for train/validation/test dataset splits. It states 'More detailed descriptions of datasets can be found in Appendix B.' but these details are not in the provided text. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models, or memory specifications. It mentions 'computational efficiency' but not the physical setup. |
| Software Dependencies | No | The paper does not list specific software dependencies with their version numbers (e.g., programming languages, libraries, frameworks, or solvers). |
| Experiment Setup | No | The paper mentions 'We provide further details on the experimental setup and empirical evaluation in Appendix B.' However, within the provided text, there are no specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings mentioned. |
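
The Research Type row cites MAE, RMSE, and MAPE as the paper's evaluation metrics. Below is a minimal sketch of how these three forecasting metrics are typically computed; it is not drawn from the paper's released code, and the `forecast_metrics` helper and the `eps` guard are illustrative assumptions.

```python
import numpy as np

def forecast_metrics(y_true, y_pred, eps=1e-8):
    """Return MAE, RMSE, and MAPE (%) for arrays of ground truth and forecasts.

    This is an illustrative sketch, not the authors' implementation.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    # eps guards against division by zero when a true value is exactly 0
    mape = np.mean(np.abs((y_true - y_pred) / (y_true + eps))) * 100.0
    return mae, rmse, mape

# Example usage with toy traffic-flow values
mae, rmse, mape = forecast_metrics([100, 120, 90], [98, 125, 85])
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  MAPE={mape:.2f}%")
```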