Tri-Level Navigator: LLM-Empowered Tri-Level Learning for Time Series OOD Generalization

Authors: Chengtao Jian, Kai Yang, Yang Jiao

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world datasets have been conducted to elucidate the effectiveness of the proposed method.
Researcher Affiliation | Academia | Chengtao Jian, Tongji University, Shanghai, China (jct@tongji.edu.cn); Kai Yang, Tongji University, Shanghai, China (kaiyang@tongji.edu.cn); Yang Jiao, Tongji University, Shanghai, China (yangjiao@tongji.edu.cn)
Pseudocode | Yes | Algorithm 1 (SLA: Stratified Localization Algorithm)
Open Source Code | No | While the data used in our study is publicly available, we are currently unable to provide open access to the code.
Open Datasets | Yes | HHAR [Blunck et al., 2015], PAMAP [Reiss, 2012], WESAD [Schmidt et al., 2018], SWELL [Koldijk et al., 2014], USC-HAD [Zhang and Sawchuk, 2012], and DSADS [Barshan and Altun, 2013].
Dataset Splits | No | The paper mentions a 'training dataset Dtrain' and a 'test dataset Dtest' but does not explicitly specify a validation set or the split percentages for all three partitions.
Hardware Specification | Yes | All the methods are implemented with PyTorch [Paszke et al., 2019] version 1.7.1 on an NVIDIA GeForce RTX 4090 graphics card.
Software Dependencies | Yes | All the methods are implemented with PyTorch [Paszke et al., 2019] version 1.7.1.
Experiment Setup | Yes | Our baseline experiments were conducted using a network architecture consisting of a 10-layer dilated convolution network. The dilation rate for each layer is set to 2^k, where k is the layer number. We used the same kernel size of 3 across all layers. Optimization was performed using the Adam optimizer with a weight decay of 3×10^-4. For all baseline experiments, we set the batch size to 256 and the learning rate to 0.002. The training was set to run for a maximum of 50 epochs.
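
Since no code is released, the quoted baseline setup can be approximated with a minimal PyTorch sketch. The layer count, 2^k dilation schedule, kernel size, Adam optimizer, weight decay, learning rate, batch size, and epoch budget follow the description above; the input/channel widths, padding scheme, global pooling, classifier head, and dummy data are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the reported baseline: a 10-layer dilated 1-D conv
# backbone trained with Adam (lr 0.002, weight decay 3e-4), batch size 256,
# up to 50 epochs. Channel widths, padding, and the head are assumptions.
import torch
import torch.nn as nn


class DilatedConvBackbone(nn.Module):
    def __init__(self, in_channels: int = 9, hidden_channels: int = 64,
                 num_layers: int = 10, num_classes: int = 6):
        super().__init__()
        layers = []
        channels = in_channels
        for k in range(num_layers):
            dilation = 2 ** k  # dilation rate 2^k at layer k
            layers += [
                nn.Conv1d(channels, hidden_channels, kernel_size=3,
                          dilation=dilation, padding=dilation),  # keeps length
                nn.ReLU(),
            ]
            channels = hidden_channels
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(hidden_channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        h = self.features(x)
        h = h.mean(dim=-1)  # global average pooling over time (assumption)
        return self.head(h)


if __name__ == "__main__":
    model = DilatedConvBackbone()
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-3, weight_decay=3e-4)
    criterion = nn.CrossEntropyLoss()
    # Dummy batch standing in for a real time-series loader (batch size 256).
    x = torch.randn(256, 9, 128)
    y = torch.randint(0, 6, (256,))
    for epoch in range(50):  # maximum of 50 epochs
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```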