Multi-Task Model and Feature Joint Learning

Authors: Ya Li, Xinmei Tian, Tongliang Liu, Dacheng Tao

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiments are conducted on several real-world multi-task learning datasets. All results demonstrate the effectiveness of our multi-task model and feature joint learning method." |
| Researcher Affiliation | Academia | Ya Li, Xinmei Tian (University of Science and Technology of China; muziyiye@mail.ustc.edu.cn, xinmei@ustc.edu.cn); Tongliang Liu, Dacheng Tao (University of Technology, Sydney; tliang.liu@gmail.com, dacheng.tao@uts.edu.au) |
| Pseudocode | Yes | "Algorithm 1 Multi-task model and feature joint learning" (an illustrative sketch appears below the table) |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described method is publicly available. |
| Open Datasets | Yes | "In this section, we present extensive experiments conducted on several real-world datasets including School, SARCOS, and Isolet. These datasets have been widely used for evaluation in previous multi-task learning works, for example in [Argyriou et al., 2008; Gong et al., 2012b; Chen et al., 2011; Kang et al., 2011; Gong et al., 2012a]." |
| Dataset Splits | Yes | "We randomly select 10%, 20% and 30% of the examples in each respective task as a training set, and the remaining examples are used for testing." "We randomly select 50, 100 and 150 examples to form three separate training sets respectively, and randomly select 5000 examples as test sets." "We randomly select 15%, 20%, 25% of the samples to form three training sets and use the rest of the samples as test sets." (a split-generation sketch appears below the table) |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, memory, or specific cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | No | "The parameters of all methods are tuned via cross-validation on the training set." For Isolet, "We first preprocess the data with PCA by reducing the dimensionality to 100." However, specific hyperparameters and detailed training configurations (e.g., learning rate, batch size, optimizer) are not provided. (a preprocessing/tuning sketch appears below the table) |
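
The Pseudocode row refers to the paper's Algorithm 1, but no implementation is released (see the Open Source Code row). As a rough illustration only, here is a minimal NumPy sketch of one common way to jointly learn a shared feature transformation and per-task models by alternating minimization. The factorization W = U·V, the ridge penalties, and the update order are assumptions made for this sketch; it is not the paper's Algorithm 1.

```python
import numpy as np

def joint_multitask_fit(Xs, ys, k=5, lam=0.1, iters=50, seed=0):
    """Alternating minimization for multi-task regression (illustrative).

    Factor the d x T weight matrix as W = U @ V: U (d x k) is a feature
    transformation shared across tasks, V (k x T) holds the per-task
    models, and both factors are learned jointly. This conveys the
    general "model and feature joint learning" idea only; it is NOT
    the paper's Algorithm 1.
    """
    rng = np.random.default_rng(seed)
    T, d = len(Xs), Xs[0].shape[1]
    U = 0.01 * rng.standard_normal((d, k))
    V = 0.01 * rng.standard_normal((k, T))
    for _ in range(iters):
        # Step 1: with U fixed, each task model v_t is a ridge
        # regression in the shared feature space Z_t = X_t @ U.
        for t in range(T):
            Z = Xs[t] @ U
            V[:, t] = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ ys[t])
        # Step 2: with all v_t fixed, the shared U solves one
        # regularized least-squares problem, via the identity
        # X_t @ U @ v_t = (v_t^T kron X_t) @ vec(U) (column-major vec).
        G = lam * np.eye(d * k)
        b = np.zeros(d * k)
        for t in range(T):
            M = np.kron(V[:, t][None, :], Xs[t])  # shape (n_t, d*k)
            G += M.T @ M
            b += M.T @ ys[t]
        U = np.linalg.solve(G, b).reshape(d, k, order="F")
    return U, V
```

Both updates have closed forms, which is why alternating schemes of this shape decrease the joint objective at every step; whatever regularizer the paper actually uses would replace the plain ridge penalties assumed here.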
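The Dataset Splits row quotes per-task random splits (e.g., 10%/20%/30% of each task's examples for School, or fixed counts of 50/100/150 training examples). A minimal sketch of how such splits could be regenerated, assuming each task is stored as an (X, y) pair; the helper name `split_tasks` is hypothetical.

```python
from sklearn.model_selection import train_test_split

def split_tasks(tasks, train_size, seed=0):
    """Randomly split each task's examples into train/test sets.

    `tasks` is an assumed list of (X, y) pairs, one per task.
    `train_size` may be a fraction (e.g., 0.1 for a 10% split) or an
    absolute count (e.g., 50 examples), matching the two kinds of
    splits the paper reports.
    """
    splits = []
    for X, y in tasks:
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_size, random_state=seed)
        splits.append((X_tr, y_tr, X_te, y_te))
    return splits
```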
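The Experiment Setup row confirms only two concrete settings: PCA to 100 dimensions for Isolet, and parameter tuning by cross-validation on the training set. A scikit-learn sketch under exactly those two settings; the ridge estimator, the alpha grid, and the fold count are placeholders, since the paper reports none of them.

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# PCA to 100 dimensions is stated in the paper (for Isolet); the ridge
# model, the alpha grid, and cv=5 are placeholder assumptions.
pipeline = Pipeline([
    ("pca", PCA(n_components=100)),
    ("model", Ridge()),
])
search = GridSearchCV(
    pipeline,
    param_grid={"model__alpha": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
# Tuning happens on the training split only, per the paper's protocol:
# search.fit(X_train, y_train)
```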