Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Enhancing Masked Time-Series Modeling via Dropping Patches

Authors: Tianyu Qiu, Yi Xie, Hao Niu, Yun Xiong, Xiaofeng Gao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper conducts comprehensive experiments to verify the effectiveness of the method and analyze its internal mechanism. Empirically, DropPatch strengthens the attention mechanism, reduces information redundancy, and serves as an efficient means of data augmentation. Theoretically, it is proved that DropPatch slows the rate at which the Transformer representations collapse into a rank-1 linear subspace by randomly dropping patches, thereby improving the quality of the learned representations.
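The core operation described above, randomly removing a fraction of the input patches before encoding, can be illustrated with a minimal NumPy sketch. The function name, signature, and exact sampling scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def drop_patches(patches, drop_ratio=0.6, rng=None):
    """Randomly keep a subset of patches along the patch axis.

    patches: array of shape (num_patches, patch_len).
    Returns the kept patches with their original order preserved.
    NOTE: a hypothetical sketch of the dropping step, not the paper's code.
    """
    rng = rng or np.random.default_rng()
    n = patches.shape[0]
    n_keep = max(1, int(round(n * (1.0 - drop_ratio))))
    # Sample indices without replacement, then sort to preserve temporal order.
    keep_idx = np.sort(rng.choice(n, size=n_keep, replace=False))
    return patches[keep_idx]
```

With the paper's default drop ratio of 0.6, a series of 40 patches would be reduced to 16 before masking and encoding.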
Researcher Affiliation | Academia | 1 Shanghai Key Lab of Data Science, School of Computer Science, Fudan University, Shanghai, China; 2 MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai, China.
Pseudocode | No | The paper describes the method in the 'Method' section with textual descriptions and formulas, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions an 'Extended version https://arxiv.org/abs/2412.15315', which refers to an extended version of the paper itself, not a code repository. There is no explicit statement about releasing code or a link to a code repository for the methodology described.
Open Datasets | Yes | All datasets are publicly available, as described in (Wu et al. 2021) and (Liu et al. 2022).
Dataset Splits | No | The paper describes in-domain and cross-domain experimental settings and mentions forecasting horizons (e.g., T ∈ {96, 192, 336, 720}) and lookback lengths (e.g., L_ft = 96, L_pt = 512). However, it does not provide specific training, validation, or test dataset splits in terms of percentages, counts, or citations to predefined splits.
Hardware Specification | Yes | All experiments are conducted on a single NVIDIA Tesla V100-SXM2-32GB GPU.
Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers, such as programming languages, libraries, or frameworks.
Experiment Setup | Yes | Unless otherwise stated, the input sequence length of DropPatch is set to 512, and the patch length is fixed at 12, following self-supervised PatchTST (Nie et al. 2022). Unless otherwise stated, the drop ratio and mask ratio are 0.6 and 0.4, respectively, throughout this paper. The Mean Squared Error (MSE) loss is used to measure the discrepancy between the reconstruction and the ground truth. The lookback length L_ft is fixed at 96, which is shorter than the lookback length L_pt = 512 used in the pre-training stage.
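The reported hyperparameters (input length 512, patch length 12, drop ratio 0.6, mask ratio 0.4, MSE reconstruction loss) can be sketched as a pre-training data pipeline. This is a minimal NumPy illustration under those stated values; the function names, Bernoulli masking scheme, and truncation of the remainder patch are assumptions, not the paper's actual code.

```python
import numpy as np

PATCH_LEN = 12    # patch length reported in the paper
SEQ_LEN = 512     # pre-training input sequence length
DROP_RATIO = 0.6  # fraction of patches removed before encoding
MASK_RATIO = 0.4  # fraction of remaining patches masked for reconstruction

def prepare_batch(series, rng):
    """Sketch of the pipeline: patchify, drop patches, mask the rest.

    series: 1-D array of length >= SEQ_LEN (a single channel).
    Returns (kept_patches, mask) where mask marks patches to reconstruct.
    """
    n = SEQ_LEN // PATCH_LEN  # 42 full patches; remainder is truncated here
    patches = series[: n * PATCH_LEN].reshape(n, PATCH_LEN)
    # Drop 60% of patches uniformly at random, keeping temporal order.
    keep = np.sort(rng.choice(n, size=int(n * (1 - DROP_RATIO)), replace=False))
    kept = patches[keep]
    # Mask ~40% of the surviving patches (per-patch Bernoulli, an assumption).
    mask = rng.random(len(kept)) < MASK_RATIO
    return kept, mask

def mse_loss(recon, target, mask):
    """MSE between reconstruction and ground truth over masked patches only."""
    return np.mean((recon[mask] - target[mask]) ** 2)
```

Under these settings a 512-step series yields 42 patches, of which 16 survive dropping and roughly 6 to 7 are masked for the reconstruction objective.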