Towards Understanding Evolving Patterns in Sequential Data

Authors: Qiuhao Zeng, Long-Kai Huang, Qi Chen, Charles X. Ling, Boyu Wang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthetic and real-world datasets including images and tabular data validate the efficacy of our EVORATE method.
Researcher Affiliation | Collaboration | Qiuhao Zeng (Western University, qzeng53@uwo.ca); Long-Kai Huang (Tencent AI Lab, hlongkai@gmail.com); Qi Chen (Laval University, qi.chen.1@ulaval.ca); Charles Ling (Western University, charles.ling@uwo.ca); Boyu Wang (Western University, bwang@csd.uwo.ca)
Pseudocode | Yes | Algorithm 1 EVORATE: data is sampled in a sequential manner with correspondence; Algorithm 2 EVORATEW: data is sampled from different timestamps but without correspondence
Open Source Code | Yes | The code is available on GitHub: https://github.com/HardworkingPearl/EvoRate.
Open Datasets | Yes | Experiments on synthetic and real-world datasets including images and tabular data validate the efficacy of our EVORATE method. ... M4 [35] ... Crypto [4] ... Player Trajectory [32] ... Rotated MNIST (RMNIST) [22] is an adaptation of the popular MNIST digit dataset [15] ... Portraits [23] ... Caltran [26] ... Power Supply [14] ... KITTI dataset [20]
Dataset Splits | Yes | We use the original training set from the competition and do an 80%-10%-10% training-validation-test split. ... We split the domains into source domains (1-22 domains), intermediate domains (22-25 domains), and target domains (26-30 domains). The intermediate domains are utilized as the validation set.
Hardware Specification | Yes | All experiments are carried out on a machine with 498 GB of memory, 2x AMD Milan 7413 @ 2.65 GHz (128 MB L3 cache), and 2x NVIDIA A100 SXM4 GPUs (40 GB memory each).
Software Dependencies | No | The paper mentions using the 'POT: Python Optimal Transport' package [19] and models such as LSTM and Transformer, but does not provide version numbers for any software dependency, such as Python, PyTorch/TensorFlow, or other libraries.
Experiment Setup | No | The paper describes model architectures such as fully-connected networks with ReLU activations, uses LSTMs, and specifies varying 'k' values for the order approximation and the number of features. However, it does not explicitly provide common training hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings.
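The 80%-10%-10% training-validation-test split reported in the Dataset Splits row can be sketched in a few lines. This is an illustrative reconstruction, not code from the EvoRate repository; the function name and the assumption that samples are split in their given (chronological) order are ours.

```python
# Hedged sketch (not from the EvoRate repo): an 80%-10%-10%
# train/validation/test split over a sequence, preserving sample order.
def split_80_10_10(samples):
    """Return (train, val, test) covering 80%, 10%, and 10% of samples."""
    n = len(samples)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]  # remainder goes to the test set
    return train, val, test

train, val, test = split_80_10_10(list(range(100)))
print(len(train), len(val), len(test))  # → 80 10 10
```

Keeping the remainder in the test chunk ensures no sample is dropped when the total count is not divisible by ten.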
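The Software Dependencies row notes that the paper relies on the POT (Python Optimal Transport) package. As a dependency-free illustration of the quantity such packages compute, the simplest case is the 1-D Wasserstein-1 distance between two equal-size empirical samples, which reduces to the mean absolute difference of their sorted values. This sketch is ours, not the paper's code, and stands in for the POT calls the paper actually uses.

```python
# Illustrative sketch (ours, not the paper's code): in one dimension,
# the Wasserstein-1 distance between two equal-size empirical samples
# is minimized by the sorted (monotone) matching.
def wasserstein_1d(xs, ys):
    """Mean absolute difference of sorted values; assumes len(xs) == len(ys)."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

print(wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # → 1.0
```

For general cost matrices and unequal sample weights, the full linear-programming solver in a library like POT is needed; the sorted matching shortcut is specific to the 1-D case.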