Robust Temporal Smoothness in Multi-Task Learning

Authors: Menghui Zhou, Yu Zhang, Yun Yang, Tong Liu, Po Yang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on synthetic and real-life datasets demonstrate the effectiveness of our frameworks. To demonstrate the competitiveness of the proposed approaches, we compare them with Laplacian-based temporal similarity (LTS) and fused-Lasso-based temporal similarity (FTS). The implementation code of all these competing methods is in the supplementary material. For all the methods, the hyperparameters are selected by grid search and 3-fold cross validation. For each dataset, the experiments on each method are repeated 5 times by splitting the data set randomly, and the mean and standard deviation of the results are reported.
Researcher Affiliation | Academia | Menghui Zhou1, Yu Zhang2, Yun Yang1, Tong Liu2, Po Yang2; 1 Department of Software, Yunnan University, Kunming, China; 2 Department of Computer Science, Sheffield University, Sheffield, UK; mhzcn@mail.ynu.edu.cn, yzhang489@sheffield.ac.uk, yangyun@ynu.edu.cn, {t.liu, po.yang}@sheffield.ac.uk
Pseudocode | No | The paper describes the optimization algorithm using mathematical formulations and textual explanations but does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | For the implementation code and Appendix, please refer to https://github.com/menghuizhou/RoTS.
Open Datasets | Yes | Alzheimer's Disease (AD) Dataset: this dataset (Jack Jr et al. 2008) consists of three subsets: RAVLT, MMSE, and ADAS-Cog (ADAS). The National Institutes of Health (NIH) funded the Alzheimer's Disease Neuroimaging Initiative (ADNI) in 2003 to facilitate the scientific evaluation of neuroimaging data, including magnetic resonance imaging (MRI), together with clinical and neuropsychological assessments, for predicting the onset and progression of mild cognitive impairment (MCI) and AD. The three data sets RAVLT, MMSE, and ADAS all come from ADNI (Weiner et al. 2017).
Dataset Splits | Yes | For all the methods, the hyperparameters are selected by grid search and 3-fold cross validation. The training ratio, defined as the ratio of the training set size to the full data set, is 0.5. We also conduct experiments on the AD datasets with a training ratio of 0.2.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models or memory specifications.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | Yes | The search range of the regularization parameters is [0.1, 1, 10, 50, 100, 200, 500, 1000, 2500, 5000]. The root mean square error (rMSE) is used to evaluate the performance of the methods, as is standard in the multi-task learning literature (Yao, Cao, and Chen 2019). We stop the iterative procedure of the algorithms when the change in the objective value between two consecutive iterations is smaller than 10^-4.
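Taken together, the rows above describe a concrete evaluation protocol: grid search over the listed regularization values with 3-fold cross validation, 5 repetitions with random train/test splits at a 0.5 training ratio, rMSE as the metric, and a 10^-4 objective-change stopping rule. A minimal sketch of that protocol, using scikit-learn's `Ridge` on synthetic data as a hypothetical stand-in for the paper's RoTS/LTS/FTS solvers and the ADNI datasets:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

def rmse(y_true, y_pred):
    # root mean square error, the metric reported in the paper
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def converged(obj_prev, obj_curr, tol=1e-4):
    # stop when the change in objective value between two
    # consecutive iterations falls below the 10^-4 tolerance
    return abs(obj_prev - obj_curr) < tol

# Search range of the regularization parameters, as listed above.
param_grid = {"alpha": [0.1, 1, 10, 50, 100, 200, 500, 1000, 2500, 5000]}

# Synthetic stand-in data (the paper uses ADNI subsets and synthetic sets).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

scores = []
for seed in range(5):  # 5 repetitions with random splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.5, random_state=seed)  # training ratio 0.5
    search = GridSearchCV(Ridge(), param_grid, cv=3)  # 3-fold CV grid search
    search.fit(X_tr, y_tr)
    scores.append(rmse(y_te, search.predict(X_te)))

# Mean and standard deviation over the 5 splits, as reported in the paper.
print(f"rMSE: {np.mean(scores):.4f} +/- {np.std(scores):.4f}")
```

The `rmse` and `converged` helpers and all variable names are illustrative; the actual solvers and stopping logic are in the authors' repository.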