Learning Multi-Task Sparse Representation Based on Fisher Information

Authors: Yayu Zhang, Yuhua Qian, Guoshuai Ma, Keyin Zheng, Guoqing Liu, Qingfu Zhang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that, compared with other methods, the proposed method improves performance on all tasks and achieves high sparsity in multi-task learning. The "Experimental Studies" section presents a comparative analysis of the performance of FS with related works; the experimental results are presented in Table 1 and Table 2.
Researcher Affiliation | Academia | (1) Institute of Big Data Science and Industry, Shanxi University, Taiyuan 030006, China; (2) School of Computer Science and Technology, North University of China, Taiyuan, Shanxi 030051, China; (3) Department of Computer Science, City University of Hong Kong, Hong Kong, China; (4) The City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
Pseudocode | Yes | Pseudo-code is shown in Algorithm 1 and Algorithm 2. Algorithm 1: FSMTL Algorithm Framework; Algorithm 2: Updating Sparse Variable Set S. A hedged sketch of what these algorithms might look like is given after this table.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | The paper conducts experiments on three multi-task datasets: DKL-mnist, CelebA, and CityScapes.
Dataset Splits | No | The paper does not explicitly provide details about training/validation/test dataset splits, such as percentages or sample counts for a validation set.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not provide a reproducible description of ancillary software dependencies with specific version numbers.
Experiment Setup | No | The paper includes an 'Optimization and Implementation Detail' section that discusses parameter updates, but it does not specify concrete hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings.
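
The paper's method is available only as pseudocode (Algorithm 1: FSMTL Algorithm Framework; Algorithm 2: Updating Sparse Variable Set S). As a rough, non-authoritative illustration of the general technique those titles suggest (scoring parameters with a diagonal Fisher information approximation and retaining only a sparse subset), here is a minimal PyTorch sketch. The function names `fisher_scores` and `update_sparse_set`, the `keep_ratio` parameter, and the top-k selection rule are all illustrative assumptions, not the paper's actual FSMTL procedure.

```python
import torch

def fisher_scores(model, loss_fn, data_loader, device="cpu"):
    """Diagonal Fisher approximation (an assumption, not the paper's
    exact estimator): average the squared gradients of the loss with
    respect to each parameter over a data loader."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: s / max(n_batches, 1) for n, s in scores.items()}

def update_sparse_set(scores, keep_ratio=0.1):
    """Hypothetical stand-in for 'Updating Sparse Variable Set S':
    keep the globally top-k parameters by Fisher score and mask out
    the rest, yielding one binary mask per parameter tensor."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return {n: (s >= threshold).float() for n, s in scores.items()}
```

In a multi-task setting such as the paper's, per-task losses would presumably contribute their own Fisher scores and the masks would be coordinated across tasks; the sketch above scores a single loss only, and the resulting masks would typically be applied multiplicatively to parameters or gradients during training.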