Multi-Task Personalized Learning with Sparse Network Lasso

Authors: Jiankun Wang, Lu Sun

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on various synthetic and real-world datasets demonstrate its robustness and effectiveness." and "Empirical results on both synthetic and real-world datasets demonstrate the superiority of MTPL."
Researcher Affiliation | Academia | Jiankun Wang and Lu Sun, School of Information Science and Technology, ShanghaiTech University, Shanghai, China. {wangjk, sunlu1}@shanghaitech.edu.cn
Pseudocode | No | The paper describes the optimization algorithm and update procedures but does not contain a structured pseudocode or algorithm block, nor a section explicitly labeled 'Algorithm' or 'Pseudocode'.
Open Source Code | Yes | "We provide the supplementary material of MTPL at: https://github.com/JiankunWang912/MTPL." and "We provide the MATLAB code of MTPL at: https://github.com/JiankunWang912/MTPL."
Open Datasets | Yes | We conduct experiments on six real-world multi-task datasets: School [2], SARCOS [3], Sales [4], Parkinsons [4], Computer [5] and Isolet [6]. Table 1 summarizes their statistics. Details of the datasets are provided in the supplement. [2] https://github.com/jiayuzhou/MALSAR/tree/master/data [3] http://www.gaussianprocess.org/gpml/data [4] https://archive.ics.uci.edu/ml/datasets.php [5] https://github.com/probml/pmtk3/tree/master/data [6] http://www.cad.zju.edu.cn/home/dengcai/Data/MLData.html
Dataset Splits | Yes | "For evaluation, we randomly select 60%, 20% and 20% of total samples for training, testing and validation, respectively."
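The 60%/20%/20% split can be reproduced with a seeded random permutation of sample indices. A minimal sketch (the paper's released code is MATLAB; this Python version and the function name `split_indices` are illustrative, not the authors' implementation):

```python
import random

def split_indices(n_samples, seed=0):
    """Randomly partition sample indices into 60% train, 20% test,
    and 20% validation, matching the paper's evaluation protocol."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)  # seeded, reproducible shuffle
    n_train = int(0.6 * n_samples)
    n_test = int(0.2 * n_samples)
    return (indices[:n_train],                      # training set
            indices[n_train:n_train + n_test],      # test set
            indices[n_train + n_test:])             # validation set

train, test, val = split_indices(100)
```

Seeding the shuffle makes repeated runs comparable across the baselines being evaluated.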
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., CPU/GPU models, processor speeds, or memory amounts).
Software Dependencies | No | The paper mentions providing 'MATLAB code' but does not specify exact versions for MATLAB or for any other software components, libraries, or solvers used in the experiments.
Experiment Setup | Yes | The number K of latent bases in GBDSP, VSTG, FORMULA and MTPL is selected from {3, 5, 7, 9, 11}. The value k of the similarity function used in Network Lasso, Localized Lasso and MTPL is fixed to 5. The value k of the k-support norm in VSTG is selected from {1, 2, 3}. The number of transfer groups in GBDSP is selected from {3, 5, 7, 9, 11}. The search grid for the other hyper-parameters is set as {2^-10, 2^-8, ..., 2^8, 2^10}. For each iterative algorithm, we terminate it once the relative change of its objective is below 10^-5, and set the maximum number of iterations as 1000.
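The hyper-parameter grid {2^-10, 2^-8, ..., 2^8, 2^10} and the relative-change stopping rule are straightforward to sketch. In the sketch below, `demo_step` is a toy objective standing in for MTPL's actual objective (which the paper defines; it is not reproduced here), so the numbers it returns are purely illustrative:

```python
# Search grid for the regularization hyper-parameters:
# powers of two from 2^-10 to 2^10 in steps of 2.
grid = [2.0 ** p for p in range(-10, 11, 2)]

def run_until_converged(objective_step, tol=1e-5, max_iter=1000):
    """Call objective_step() repeatedly until the relative change of the
    objective falls below tol, or max_iter iterations are reached
    (the stopping criterion stated in the paper)."""
    prev = objective_step()
    for it in range(1, max_iter):
        cur = objective_step()
        if abs(prev - cur) / max(abs(prev), 1e-12) < tol:
            return cur, it
        prev = cur
    return prev, max_iter

def demo_step(state={"k": 0}):
    # Toy decreasing objective 1 + 0.5^k; a placeholder, not MTPL's objective.
    state["k"] += 1
    return 1.0 + 0.5 ** state["k"]

final_obj, n_iters = run_until_converged(demo_step)
```

With the toy objective, the loop stops after a handful of iterations once successive values differ by less than one part in 10^5, well before the 1000-iteration cap.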