Relational Multi-Task Learning: Modeling Relations between Data and Tasks

Authors: Kaidi Cao, Jiaxuan You, Jure Leskovec

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate MetaLink on 6 benchmark datasets in both biochemical and vision domains. Experiments demonstrate that MetaLink can successfully utilize the relations among different tasks, outperforming the state-of-the-art methods under the proposed relational multi-task learning setting, with up to 27% improvement in ROC AUC.
Researcher Affiliation | Academia | Kaidi Cao, Jiaxuan You, Jure Leskovec; Department of Computer Science, Stanford University; {kaidicao, jiaxuan, jure}@cs.stanford.edu
Pseudocode | Yes | Algorithm 1: MetaLink Training in Relational Meta Setting
Open Source Code | Yes | Source code is available at https://github.com/snap-stanford/GraphGym
Open Datasets | Yes | Tox21 (Huang et al., 2016), Sider (Kuhn et al., 2016), ToxCast (Richard et al., 2016), and MS-COCO (Lin et al., 2014)
Dataset Splits | Yes | We search over the number of layers in [2, 3, 4, 5] and report the test set performance when the best validation set performance is reached. [A sketch of this selection protocol appears after this table.]
Hardware Specification | Yes | We use one NVIDIA RTX 8000 GPU for each experiment, and the most time-consuming one (MS-COCO) takes less than 24 hours.
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number for it or other software dependencies.
Experiment Setup | Yes | We use the Adam optimizer with an initial learning rate of 0.001 and a cosine learning rate scheduler. The model is trained with a batch size of 128 for 50 epochs. [A hedged configuration sketch appears after this table.]
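
The model-selection protocol quoted under Dataset Splits (search over 2-5 layers, report test performance at the point of best validation performance) can be made concrete in a few lines. The sketch below is a minimal, hypothetical illustration, not the authors' code; the function name, the `runs` structure, and the toy numbers are all illustrative assumptions.

```python
def pick_test_at_best_val(runs):
    """Select the test metric recorded at the best validation point.

    runs: dict mapping num_layers -> list of (val_auc, test_auc) pairs,
    one pair per epoch. Returns (num_layers, test_auc) for the epoch
    with the highest validation AUC across all configurations.
    """
    best_layers, best_val, best_test = None, float("-inf"), None
    for num_layers, curve in runs.items():
        for val_auc, test_auc in curve:
            if val_auc > best_val:
                best_layers, best_val, best_test = num_layers, val_auc, test_auc
    return best_layers, best_test

# Toy usage with made-up numbers (not results from the paper):
runs = {
    2: [(0.71, 0.70), (0.74, 0.72)],
    3: [(0.75, 0.73), (0.78, 0.76)],
}
print(pick_test_at_best_val(runs))  # -> (3, 0.76)
```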
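
The Experiment Setup row maps directly onto standard PyTorch components. The sketch below wires up Adam with an initial learning rate of 0.001 and a cosine learning-rate schedule over the reported 50 epochs; the `nn.Linear` model is a placeholder assumption, since the paper's MetaLink architecture is not reproduced here.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 12)  # placeholder model, not the paper's architecture
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# Cosine schedule decaying over the full 50-epoch run
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

for epoch in range(50):
    # ... iterate over mini-batches of size 128 here:
    # loss.backward(); optimizer.step(); optimizer.zero_grad()
    scheduler.step()  # advance the cosine learning-rate decay once per epoch
```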