Adaptive Adversarial Multi-task Representation Learning

Authors: Yuren Mao, Weiwei Liu, Xuemin Lin

ICML 2020

Reproducibility Variable Result LLM Response
Research Type Experimental We further conduct extensive experiments to back up our theoretical analysis and validate the superiority of our proposed algorithm.
Researcher Affiliation Academia 1School of Computer Science and Engineering, University of New South Wales, Australia. 2School of Computer Science, Wuhan University, China.
Pseudocode Yes Algorithm 1 Adaptive Adversarial MTRL
Open Source Code Yes The code can be found in the Supplementary Materials.
Open Datasets Yes The training/testing/validation partition is randomly split into 70% training, 10% testing and 20% validation. The training/testing/validation partition is randomly split into 60% training, 20% testing and 20% validation.
Dataset Splits Yes The training/testing/validation partition is randomly split into 70% training, 10% testing and 20% validation. The training/testing/validation partition is randomly split into 60% training, 20% testing and 20% validation.
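The paper reports the split percentages (70%/10%/20% for one benchmark, 60%/20%/20% for the other) but not the splitting code. A minimal sketch of such a random partition is below; the function name, seed, and dataset size are illustrative assumptions, not details from the paper.

```python
import random

def split_indices(n, fracs=(0.7, 0.1, 0.2), seed=0):
    """Randomly partition n example indices into train/test/validation lists.

    `fracs` defaults to the paper's reported 70%/10%/20%
    train/test/validation split; pass (0.6, 0.2, 0.2) for the other
    reported partition. The seed is an assumption for reproducibility.
    """
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    n_train = int(fracs[0] * n)
    n_test = int(fracs[1] * n)
    train = idx[:n_train]
    test = idx[n_train:n_train + n_test]
    val = idx[n_train + n_test:]  # remainder goes to validation
    return train, test, val

train, test, val = split_indices(1000)
```

With 1000 examples this yields 700/100/200 disjoint index lists that together cover the whole dataset.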
Hardware Specification No The paper does not specify the hardware used for its experiments (no GPU/CPU models, clock speeds, or memory amounts).
Software Dependencies No The implementation is based on PyTorch (Paszke et al., 2019).
Experiment Setup Yes We train the deep AAMTRL network model with Algorithm 1 settings λ0 = 1, r0 = 10 and r_{k+1} = r_k + 2; here, R0 is a matrix of ones. We use the Adam optimizer (Kingma & Ba, 2015) and train 600 epochs for sentiment analysis and 1200 epochs for topic classification. The batch size is 256 for both sentiment analysis and topic classification. We use dropout with probability of 0.5 for all task-specific output modules. For all experiments, we search over the set {1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2} of learning rates and choose the model with the highest validation accuracy.
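The setup above pins down a handful of concrete hyperparameters: the adaptive-radius schedule r_{k+1} = r_k + 2 with r0 = 10, λ0 = 1, batch size 256, dropout 0.5, and a six-point learning-rate grid selected by validation accuracy. A sketch of just those pieces follows; only the numbers come from the paper, while the function names and the example accuracy table are illustrative placeholders (the actual training loop, model, and adversarial objective are in the authors' supplementary code).

```python
# Hyperparameters as reported in the paper's experiment setup.
LAMBDA_0 = 1.0
BATCH_SIZE = 256
DROPOUT_P = 0.5
LR_GRID = [1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2]

def radius(k, r0=10, step=2):
    """Adaptive radius schedule: r_{k+1} = r_k + 2 with r_0 = 10,
    which closes to r_k = r0 + step * k."""
    return r0 + step * k

def select_lr(val_acc_by_lr):
    """Model selection as described: pick the learning rate whose run
    achieved the highest validation accuracy."""
    return max(val_acc_by_lr, key=val_acc_by_lr.get)

# Hypothetical validation accuracies, one per grid point.
accs = {1e-4: 0.81, 5e-4: 0.84, 1e-3: 0.86, 5e-3: 0.83, 1e-2: 0.79, 5e-2: 0.70}
best_lr = select_lr(accs)
```

With the placeholder accuracies above, `select_lr` returns 1e-3; in practice each grid point would correspond to a full training run.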