Learning Conflict-Noticed Architecture for Multi-Task Learning
Authors: Zhixiong Yue, Yu Zhang, Jie Liang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on computer vision, natural language processing, and reinforcement learning benchmarks demonstrate the effectiveness of the proposed methods. |
| Researcher Affiliation | Academia | Zhixiong Yue (1,2), Yu Zhang (1,3,*), Jie Liang (2). 1: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; 2: University of Technology Sydney; 3: Peng Cheng Laboratory, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1: Conflict-Noticed Architecture Learning |
| Open Source Code | Yes | The code of CoNAL is publicly available at https://github.com/yuezhixiong/CoNAL |
| Open Datasets | Yes | Experiments on Computer Vision (CV), Natural Language Processing (NLP), and Reinforcement Learning (RL) benchmark datasets demonstrate the effectiveness of the proposed methods. We conduct experiments on four CV benchmark datasets: Cityscapes (Cordts et al. 2016), NYUv2 (Silberman et al. 2012), PASCAL-Context (Mottaghi et al. 2014), and Taskonomy (Zamir et al. 2018). [...] MT10 challenge from the Meta-World environment (Yu et al. 2020b). [...] CelebA dataset (Liu et al. 2015). |
| Dataset Splits | Yes | L_val(·, ·) denotes the total loss on the validation dataset. [...] Input: Datasets D_tr and D_val (see the illustrative sketch after the table) |
| Hardware Specification | No | The paper mentions running experiments and comparing methods but does not specify any hardware details such as GPU models, CPU types, or memory used for these experiments. |
| Software Dependencies | No | The paper describes its method and experiments but does not provide specific software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | Due to page limit, details on the experimental setup are put in Appendix A.8. For fair comparison, we use the same backbone (with details in Appendix A.8) for all the models in comparison. |
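As noted in the "Dataset Splits" row, Algorithm 1 takes both a training set D_tr and a validation set D_val as input, and the architecture is assessed through a total validation loss L_val. The sketch below illustrates that generic training/validation-split update pattern in Python/PyTorch. It is a minimal illustration only, not the authors' CoNAL implementation (see the linked GitHub repository for that); the `model.loss` method, `arch_params`, optimizer choices, and learning rates are all assumptions made for the example.

```python
# Hypothetical sketch of an alternating update scheme that consumes separate
# training (D_tr) and validation (D_val) splits, as referenced in Algorithm 1's input.
# This is NOT the authors' CoNAL code; names and structure are illustrative.
import torch

def bilevel_train(model, arch_params, d_tr_loader, d_val_loader,
                  epochs=1, lr_w=1e-3, lr_a=1e-3):
    """Alternate weight updates on D_tr with architecture-parameter updates on D_val."""
    opt_w = torch.optim.SGD(model.parameters(), lr=lr_w)  # model weights
    opt_a = torch.optim.Adam(arch_params, lr=lr_a)         # architecture parameters
    for _ in range(epochs):
        for (x_tr, y_tr), (x_val, y_val) in zip(d_tr_loader, d_val_loader):
            # Lower level: update model weights on the training split.
            opt_w.zero_grad()
            loss_tr = model.loss(x_tr, y_tr)   # hypothetical loss method
            loss_tr.backward()
            opt_w.step()
            # Upper level: update architecture parameters on the validation split,
            # i.e., descend the total validation loss L_val.
            opt_a.zero_grad()
            loss_val = model.loss(x_val, y_val)
            loss_val.backward()
            opt_a.step()
    return model, arch_params
```

This first-order alternation is only the simplest way to make use of two dataloaders split from the same dataset; the paper's actual update rule may differ and is described in the publicly released code.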