Deep Multi-Task Learning with Adversarial-and-Cooperative Nets

Authors: Pei Yang, Qi Tan, Jieping Ye, Hanghang Tong, Jingrui He

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that our proposed method significantly outperforms the state-of-the-art algorithms on the benchmark datasets in both multi-task learning and semi-supervised domain adaptation scenarios.
Researcher Affiliation | Academia | Pei Yang (South China University of Technology; Arizona State University), Qi Tan (South China Normal University), Jieping Ye (University of Michigan), Hanghang Tong (Arizona State University), Jingrui He (Arizona State University)
Pseudocode | Yes | Algorithm 1: The TACO Algorithm (an illustrative sketch follows the table)
Open Source Code | No | The paper does not provide any links to source code or explicitly state that the code for the described methodology is available.
Open Datasets | Yes | The Office-Home [Venkateswara et al., 2017] dataset... (http://hemanthdv.org/OfficeHome-Dataset/). The Office-31 [Saenko et al., 2010] dataset... (https://people.eecs.berkeley.edu/~jhoffman/domainadapt/). The Office-Caltech dataset consists of... (http://www.vision.caltech.edu/Image_Datasets/Caltech256/)
Dataset Splits | Yes | We follow the standard protocol [Zhang and Yeung, 2010; Long et al., 2017a] for multi-task learning and randomly select 5%, 10%, and 20% of the samples from each task as trainset and use the rest as testset, respectively. Half of the trainset is randomly chosen to select the optimal parameters. (See the split sketch after the table.)
Hardware Specification | No | The paper mentions using AlexNet and VGGNet as base networks pre-trained on ImageNet, but it does not specify the hardware (e.g., GPU models, CPU, memory) used for the experiments.
Software Dependencies | No | The TACO algorithm is implemented using the Caffe framework [Jia et al., 2014]. No specific version number for Caffe or other software dependencies is provided.
Experiment Setup | Yes | The initial learning rate is set to 0.001, and momentum is 0.9. The training iteration budget is τ_max = 1000, and the batch size is b = 20. We empirically set the parameter α = 0.1. (See the training-loop sketch after the table.)
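
The Pseudocode row points to Algorithm 1 (the TACO algorithm), whose listing is not reproduced here. The sketch below is a generic task-adversarial multi-task setup in PyTorch (shared encoder, per-task heads, and a task discriminator trained through gradient reversal), offered purely for illustration. The layer sizes, the loss composition, and the use of α = 0.1 as the adversarial weight are assumptions; this is not the authors' Algorithm 1.

    # Illustrative only: generic task-adversarial multi-task learning,
    # NOT the authors' Algorithm 1. Sizes and loss weights are assumptions.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; negated, scaled gradient backward."""
        @staticmethod
        def forward(ctx, x, alpha):
            ctx.alpha = alpha
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.alpha * grad_out, None

    n_tasks, feat_dim, n_classes = 3, 128, 10
    encoder = nn.Sequential(nn.Linear(256, feat_dim), nn.ReLU())  # shared features
    heads = nn.ModuleList([nn.Linear(feat_dim, n_classes) for _ in range(n_tasks)])
    task_disc = nn.Linear(feat_dim, n_tasks)  # adversary: guesses the task id
    ce = nn.CrossEntropyLoss()

    x = torch.randn(20, 256)              # one batch (size 20, as reported)
    y = torch.randint(0, n_classes, (20,))
    t = torch.randint(0, n_tasks, (20,))  # task id of each sample

    z = encoder(x)
    task_loss = sum(ce(heads[k](z[t == k]), y[t == k])
                    for k in range(n_tasks) if (t == k).any())
    # Gradient reversal pushes the encoder to fool the task discriminator;
    # using alpha = 0.1 mirrors the reported α, an assumption about its role.
    adv_loss = ce(task_disc(GradReverse.apply(z, 0.1)), t)
    (task_loss + adv_loss).backward()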
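The split protocol in the Dataset Splits row translates directly into code. Below is a minimal per-task sketch using NumPy; split_task is a hypothetical helper, not code from the paper.

    # Minimal sketch of the stated protocol: per task, draw 5%/10%/20% of the
    # samples as trainset, keep the rest as testset, and randomly hold out
    # half of the trainset for parameter selection.
    import numpy as np

    def split_task(n_samples, train_frac, seed=0):
        """Return (train, val, test) index arrays for a single task."""
        rng = np.random.default_rng(seed)
        perm = rng.permutation(n_samples)
        n_train = int(round(train_frac * n_samples))
        train, test = perm[:n_train], perm[n_train:]
        val = train[: n_train // 2]     # half of trainset tunes the parameters
        train = train[n_train // 2:]
        return train, val, test

    for frac in (0.05, 0.10, 0.20):
        tr, va, te = split_task(n_samples=1000, train_frac=frac)
        print(frac, len(tr), len(va), len(te))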
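The Experiment Setup row fixes the optimizer hyperparameters. A rough PyTorch rendering of that configuration follows (the paper itself uses Caffe); the data and model are placeholders, and only the learning rate, momentum, iteration budget, and batch size come from the paper.

    # Placeholder data and model stand in for the AlexNet/VGGNet base networks.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    train_set = TensorDataset(torch.randn(200, 256), torch.randint(0, 10, (200,)))
    loader = DataLoader(train_set, batch_size=20, shuffle=True)   # b = 20
    model = torch.nn.Linear(256, 10)
    opt = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()

    tau_max = 1000  # training iterations, as reported
    it = 0
    while it < tau_max:
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            it += 1
            if it >= tau_max:
                break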