Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Towards Understanding Evolving Patterns in Sequential Data
Authors: Qiuhao Zeng, Long-Kai Huang, Qi Chen, Charles X. Ling, Boyu Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real-world datasets including images and tabular data validate the efficacy of our EVORATE method. |
| Researcher Affiliation | Collaboration | Qiuhao Zeng (Western University); Long-Kai Huang (Tencent AI Lab); Qi Chen (Laval University); Charles Ling (Western University); Boyu Wang (Western University) |
| Pseudocode | Yes | Algorithm 1 EVORATE: Data is sampled in a sequential manner with correspondence; Algorithm 2 EVORATEW: Data is sampled from different timestamps but without correspondence |
| Open Source Code | Yes | The code is available on GitHub: https://github.com/HardworkingPearl/EvoRate. |
| Open Datasets | Yes | Experiments on synthetic and real-world datasets including images and tabular data validate the efficacy of our EVORATE method. ... M4 [35] ... Crypto [4] ... Player Trajectory [32] ... Rotated MNIST (RMNIST) [22] is an adaptation of the popular MNIST digit dataset [15] ... Portraits [23] ... Caltran [26] ... Power Supply [14] ... KITTI dataset [20] |
| Dataset Splits | Yes | We use the original training set from the competition and do an 80%-10%-10% training-validation-test split. ... We split the domains into source domains (1-22 domains), intermediate domains (22-25 domains), and target domains (26-30 domains). The intermediate domains are utilized as the validation set. |
| Hardware Specification | Yes | All experiments are carried out on 498 GB memory, 2 × AMD Milan 7413 @ 2.65 GHz (128 MB L3 cache), and 2 × NVIDIA A100 SXM4 (40 GB memory). |
| Software Dependencies | No | The paper mentions using the 'POT: Python Optimal Transport' package [19] and models such as LSTM and Transformer, but does not provide version numbers for any software dependencies, such as Python, PyTorch/TensorFlow, or supporting libraries. |
| Experiment Setup | No | The paper describes model architectures such as 'fully-connected networks with ReLU activations' and uses LSTM, and specifies varying 'k' values for order approximation and number of features. However, it does not explicitly provide common training hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings. |
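The 80%-10%-10% training-validation-test split quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not the authors' released code; the function name and fixed seed are assumptions.

```python
import random

def split_80_10_10(n_samples, seed=0):
    """Shuffle sample indices, then split 80% train / 10% val / 10% test.

    Hypothetical helper illustrating the split described in the report;
    the actual EvoRate implementation may order or seed data differently.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = split_80_10_10(1000)
print(len(train), len(val), len(test))  # 800 100 100
```

For the sequential benchmarks, the quoted split is instead domain-based (early domains as source, intermediate domains as validation, late domains as target), so a chronological rather than shuffled split would apply there.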