Learning Multi-Level Task Groups in Multi-Task Learning
Authors: Lei Han, Yu Zhang
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment our approach on both synthetic and real-world datasets, showing competitive performance over state-of-the-art MTL methods. |
| Researcher Affiliation | Academia | Lei Han¹ and Yu Zhang¹,²; ¹Department of Computer Science, Hong Kong Baptist University, Hong Kong; ²The Institute of Research and Continuing Education, Hong Kong Baptist University (Shenzhen) |
| Pseudocode | Yes | Algorithm 1 The Bottom-Up Iterative Scheme for Problem (3). |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | We report results on microarray data (Wille et al. 2004). |
| Dataset Splits | Yes | We perform 10 random splits, each of which uses R%, (80-R)%, and 20% samples for training, testing and validation separately with R as the training ratio. |
| Hardware Specification | No | No specific hardware details (like GPU or CPU models, or memory specifications) used for running experiments are provided in the paper. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names like PyTorch or scikit-learn with their versions). |
| Experiment Setup | Yes | We set ε₀ = 0.6 in the experiments, which shows a better discrimination. |
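
The split protocol quoted under "Dataset Splits" (10 random splits; R% of the samples for training, (80−R)% for testing, and 20% for validation) can be mirrored with a short sketch like the one below. This is an illustration only, assuming NumPy index-based splitting; the function name `make_splits`, the seed handling, and the rounding of fractional sizes are not taken from the paper.

```python
import numpy as np

def make_splits(n_samples, train_ratio, n_repeats=10, seed=0):
    """Hypothetical sketch of the quoted protocol: for each of the random
    splits, use R% of samples for training, a fixed 20% for validation,
    and the remaining (80 - R)% for testing. `train_ratio` is R in percent."""
    rng = np.random.default_rng(seed)
    n_train = int(round(n_samples * train_ratio / 100.0))
    n_val = int(round(n_samples * 0.20))  # fixed 20% validation share
    splits = []
    for _ in range(n_repeats):
        perm = rng.permutation(n_samples)          # fresh random ordering per split
        train = perm[:n_train]
        val = perm[n_train:n_train + n_val]
        test = perm[n_train + n_val:]              # remaining (80 - R)%
        splits.append((train, val, test))
    return splits

# Example: 100 samples with a training ratio of R = 30
# gives roughly 30 / 20 / 50 train / validation / test indices per repeat.
splits = make_splits(100, train_ratio=30)
```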