Maximum Roaming Multi-Task Learning

Authors: Lucas Pascal, Pietro Michiardi, Xavier Bost, Benoit Huet, Maria A. Zuluaga

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We study the properties of our method through experiments on a variety of visual multi-task data sets. Experimental results suggest that the regularization brought by roaming has more impact on performance than usual partitioning optimization strategies. The overall method is flexible, easily applicable, provides superior regularization and consistently achieves improved performances compared to recent multi-task learning formulations.
Researcher Affiliation | Collaboration | 1 EURECOM, France; 2 Orkis, France; 3 Median Technologies, France
Pseudocode | No | The paper describes its algorithm and update rules using mathematical definitions (e.g., Definition 1) and prose, but it does not include a formal pseudocode block or algorithm listing.
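Since the paper specifies the roaming update only in prose and mathematical definitions, a minimal sketch of what one roaming step over per-task binary masks might look like is given below. Everything in it (the function name `roam_masks`, the visit-count bookkeeping, the swap size `n_swap`) is an illustrative assumption, not the paper's actual update rule:

```python
import torch

def roam_masks(masks: torch.Tensor, visit_counts: torch.Tensor, n_swap: int = 1):
    """One hypothetical 'roaming' step over per-task binary unit masks.

    For each task, deactivate the most-visited active units and activate the
    least-visited inactive ones, so every unit is eventually trained by every
    task while the per-task sharing ratio stays constant.

    masks:        (n_tasks, n_units) binary mask of which units each task uses
    visit_counts: (n_tasks, n_units) how often each unit was assigned to each task
    """
    n_tasks, _ = masks.shape
    for t in range(n_tasks):
        active = masks[t].nonzero(as_tuple=True)[0]
        inactive = (masks[t] == 0).nonzero(as_tuple=True)[0]
        if len(inactive) == 0:
            continue  # task already uses all units; nothing to roam
        # Drop the most-visited active units ...
        drop = active[visit_counts[t, active].argsort(descending=True)[:n_swap]]
        # ... and pick up the least-visited inactive units in exchange.
        pick = inactive[visit_counts[t, inactive].argsort()[:n_swap]]
        masks[t, drop] = 0.0
        masks[t, pick] = 1.0
        visit_counts[t, pick] += 1
    return masks, visit_counts
```

Because each step swaps equally many units in and out, the fraction of units assigned to each task is preserved, which matches the fixed-partition framing the paper builds on.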
Open Source Code | Yes | All code, data and experiments are available on GitHub: https://github.com/lucaspascal/Maximum-Roaming-Mutli-Task-Learning
Open Datasets | Yes | We use three publicly available datasets in our experiments: Celeb-A, Cityscapes, NYUv2. The Cityscapes dataset (Cordts et al. 2016) contains 5000 annotated street-view images with pixel-level annotations from a car point of view. The NYUv2 dataset (Silberman et al. 2012) is a challenging dataset containing 1449 indoor images recorded over 464 different scenes with a Microsoft Kinect camera.
Dataset Splits | Yes | The reported results are evaluated on the validation split provided in the official release of the dataset (Liu et al. 2015). The reported results are evaluated on the validation split provided by Liu, Johns, and Davison (2019).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and 'ResNet-18' but does not specify any software dependencies with version numbers (e.g., 'PyTorch 1.x', 'TensorFlow 2.x', 'Python 3.x').
Experiment Setup | Yes | All models are optimized with the Adam optimizer (Kingma and Ba 2017) with learning rate 10^-4. For Celeb-A, we use a batch size of 256, and all input images are resized to 64 x 64 x 3. For Cityscapes, we use a batch size of 8, and the input samples are resized to 128 x 256. For NYUv2, we use a batch size of 2, and the input samples are resized to 288 x 384. The parameter alpha in GradNorm (Chen et al. 2018) has been optimized over the set of values {0.5, 1, 1.5}.
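For concreteness, the quoted hyperparameters could be wired up as in the sketch below. This is a hedged illustration assuming PyTorch/torchvision (the paper does not name its framework); `make_setup` and the `CONFIGS` dictionary are illustrative names, not from the paper or its repository:

```python
import torch
from torchvision import transforms

# Hyperparameters quoted above; the dataset -> batch/resize mapping follows the text.
LEARNING_RATE = 1e-4                # "learning rate 10^-4" with Adam (Kingma and Ba 2017)
GRADNORM_ALPHAS = [0.5, 1.0, 1.5]   # search grid for GradNorm's alpha (Chen et al. 2018)

CONFIGS = {
    "celeba":     {"batch_size": 256, "resize": (64, 64)},    # inputs 64 x 64 x 3
    "cityscapes": {"batch_size": 8,   "resize": (128, 256)},
    "nyuv2":      {"batch_size": 2,   "resize": (288, 384)},
}

def make_setup(model, dataset, name):
    """Build the resize transform, data loader, and Adam optimizer for one dataset."""
    cfg = CONFIGS[name]
    resize = transforms.Compose([
        transforms.Resize(cfg["resize"]),
        transforms.ToTensor(),
    ])
    loader = torch.utils.data.DataLoader(
        dataset, batch_size=cfg["batch_size"], shuffle=True
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    return resize, loader, optimizer
```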