Adversarial Robustness in Multi-Task Learning: Promises and Illusions

Authors: Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "we evaluate the design choices that impact the robustness of multi-task deep learning networks. We provide evidence that blindly adding auxiliary tasks, or weighing the tasks provides a false sense of robustness. ... Surprisingly, our experimental study shows that adding more tasks does not consistently increase robustness, and may even have negative effects."
Researcher Affiliation | Academia | Salah Ghamizi, Maxime Cordy, Mike Papadakis, and Yves Le Traon, University of Luxembourg (salah.ghamizi@uni.lu, maxime.cordy@uni.lu, michail.papadakis@uni.lu, yves.letraon@uni.lu)
Pseudocode | Yes | "We describe the full algorithm in Appendix D."
Open Source Code | Yes | "We provide the appendix, all our algorithms, models, and open-source code at https://github.com/yamizi/taskaugment"
Open Datasets | Yes | "We use the Taskonomy dataset, an established dataset for multi-task learning (Zamir et al. 2018)."
Dataset Splits | Yes | "We use the architectures and training settings of the original Taskonomy paper (Zamir et al. 2018)."
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU or CPU models or memory sizes. It only names model architectures ('Resnet18 encoder'; 'Xception, Wide-Resnet, and Resnet').
Software Dependencies | No | The paper does not state version numbers for the software dependencies or libraries used in the experiments.
Experiment Setup | Yes | "We use as base setting the ℓ∞ Projected Gradient Descent attack (PGD) (Madry et al. 2017) with 25-step attacks, a strength of ε = 8/255 and a step size α = 2/255. ... We use uniform weights, a cross-entropy loss for the semantic segmentation task and an L1 loss for the other tasks."
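
The Experiment Setup row pins down the attack precisely enough to reconstruct it. Below is a minimal PyTorch sketch of the ℓ∞ PGD baseline with the quoted hyperparameters (25 steps, ε = 8/255, α = 2/255) against a multi-task model with the quoted loss: uniform task weights, cross-entropy for semantic segmentation, L1 for every other task. The dict-of-tasks model interface, the "segmentation" task key, and the choice to attack the joint loss are illustrative assumptions, not the authors' released code (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def multitask_loss(preds, targets):
    """Uniformly weighted sum of per-task losses, as quoted above:
    cross-entropy for semantic segmentation, L1 for the other tasks.
    The dict interface and the 'segmentation' key are assumptions."""
    return sum(
        F.cross_entropy(preds[t], targets[t]) if t == "segmentation"
        else F.l1_loss(preds[t], targets[t])
        for t in preds
    )

def pgd_linf(model, x, targets, eps=8/255, alpha=2/255, steps=25):
    """Untargeted l-infinity PGD (Madry et al. 2017) with the quoted
    settings. Assumes inputs are scaled to [0, 1] and that the attack
    maximizes the joint multi-task loss (one of several possible
    attack objectives in a multi-task setting)."""
    # Random start inside the eps-ball around the clean input.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = multitask_loss(model(x_adv), targets)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss along the gradient sign, then project back
        # into the eps-ball and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach()

    return x_adv
```

Note that with α = 2/255 over 25 steps, the cumulative step budget (50/255) is several times the radius ε = 8/255, so the iterates can traverse the whole ball; this is a standard strong-attack configuration for this ε.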