Which Tasks Should Be Learned Together in Multi-task Learning?

Authors: Trevor Standley, Amir Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, Silvio Savarese

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform our study using the Taskonomy dataset (Zamir et al., 2018), which is currently the largest multi-task dataset for computer vision with diverse tasks." and "We run our experiments under four settings."
Researcher Affiliation | Collaboration | Stanford University; Swiss Federal Institute of Technology (EPFL); Google Inc.; University of California, Berkeley.
Pseudocode | No | Pseudocode for the algorithm appears in the supplemental material but is not provided in the main paper.
Open Source Code | Yes | "Pseudocode for the algorithm is in the supplemental material, and our implementation is on GitHub."
Open Datasets | Yes | "We perform our study using the Taskonomy dataset (Zamir et al., 2018), which is currently the largest multi-task dataset for computer vision with diverse tasks."
Dataset Splits | Yes | "The dataset has about 4 million examples, which we divided into about 3.9 million training instances (200k for Setting 3), about 50k validation instances, and about 50k test instances." (A split sketch follows the table.)
Hardware Specification | No | No specific hardware details (such as GPU models, CPU models, or cloud instances) are provided for the experiments.
Software Dependencies | No | The paper mentions "PyTorch (Paszke et al., 2017) with Apex for fp16 acceleration (Micikevicius et al., 2017)" but does not specify exact version numbers for these dependencies. (An fp16 sketch follows the table.)
Experiment Setup | Yes | "The training loss we used was the unweighted mean of the losses for the included tasks. Networks were trained with an initial learning rate of 0.1, which was reduced by half every time the training loss stopped decreasing. Networks were trained until their validation loss stopped improving." (A training-recipe sketch follows the table.)
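
To make the quoted split concrete, here is a minimal sketch of dividing a dataset of roughly 4 million examples into ~3.9M train / ~50k validation / ~50k test subsets. Only the subset sizes come from the paper; the use of `random_split`, the fixed seed, and the stand-in dataset are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of the split described in the "Dataset Splits" row.
# Only the subset sizes come from the paper; random_split and the seed
# are assumptions made for this illustration.
import torch
from torch.utils.data import TensorDataset, random_split

def split_dataset(dataset, n_val=50_000, n_test=50_000):
    """Hold out ~50k validation and ~50k test examples; train on the rest."""
    n_train = len(dataset) - n_val - n_test  # roughly 3.9 million for Taskonomy
    return random_split(
        dataset,
        [n_train, n_val, n_test],
        generator=torch.Generator().manual_seed(0),  # assumed fixed seed
    )

# Usage with a small stand-in dataset (the real one holds ~4M examples):
toy = TensorDataset(torch.randn(200_000, 3))
train_set, val_set, test_set = split_dataset(toy, n_val=2_000, n_test=2_000)
```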
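The software-dependencies row quotes "PyTorch with Apex for fp16 acceleration"; the sketch below shows the standard Apex amp pattern that quote implies. The placeholder model, optimizer settings, and `opt_level` are assumptions, since the paper names the libraries but not their configuration or versions (Apex amp has since been superseded by `torch.cuda.amp`).

```python
# Sketch of the "PyTorch with Apex for fp16 acceleration" setup quoted above.
# The placeholder model and the opt_level are assumptions, not the paper's
# stated configuration.
import torch
from apex import amp  # NVIDIA Apex; requires a CUDA build of PyTorch

model = torch.nn.Linear(512, 10).cuda()                  # placeholder network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# "O1" (mixed precision with automatic casts) is an assumed opt_level.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

loss = model(torch.randn(4, 512, device="cuda")).mean()
with amp.scale_loss(loss, optimizer) as scaled_loss:     # loss scaling for fp16
    scaled_loss.backward()
optimizer.step()
```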
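Finally, the experiment-setup quote pins down three mechanics: an unweighted mean over per-task losses, a learning rate starting at 0.1 and halved whenever the training loss plateaus, and training stopped once validation loss stops improving. The sketch below realizes those three rules under assumed placeholders (a tiny model, synthetic batches, a patience threshold); PyTorch's `ReduceLROnPlateau` with `factor=0.5` matches the halving rule but is not confirmed as the authors' implementation.

```python
# Minimal sketch of the training recipe quoted in the "Experiment Setup" row.
# The tiny model, synthetic data, and patience values are placeholders,
# not the authors' configuration.
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

model = torch.nn.Linear(16, 2)  # placeholder for the multi-task network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)       # initial LR 0.1
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5)  # halve on plateau

def mtl_loss(outputs, targets):
    # Unweighted mean of the included tasks' losses (here: two regression heads).
    per_task = [torch.nn.functional.mse_loss(outputs[:, i], targets[:, i])
                for i in range(2)]
    return torch.stack(per_task).mean()

best_val, stale, patience = float("inf"), 0, 3  # patience is an assumed value
for epoch in range(100):
    x, y = torch.randn(32, 16), torch.randn(32, 2)  # stand-in training batch
    optimizer.zero_grad()
    loss = mtl_loss(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # the paper keys the LR drop off *training* loss

    with torch.no_grad():        # stand-in validation batch
        vx, vy = torch.randn(32, 16), torch.randn(32, 2)
        val = mtl_loss(model(vx), vy).item()
    if val < best_val - 1e-4:
        best_val, stale = val, 0
    else:
        stale += 1
    if stale >= patience:        # "trained until their validation loss stopped improving"
        break
```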