Multi-task Graph Neural Architecture Search with Task-aware Collaboration and Curriculum
Authors: Yijian Qin, Xin Wang, Ziwei Zhang, Hong Chen, Wenwu Zhu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthetic and real-world datasets validate the superiority of our proposed MTGC3 model over existing baselines via customizing the optimal architecture for each task and sharing useful information among them. |
| Researcher Affiliation | Academia | Yijian Qin¹,², Xin Wang¹,², Ziwei Zhang¹, Hong Chen¹, Wenwu Zhu¹,²; ¹Department of Computer Science and Technology, Tsinghua University; ²BNRist. Emails: qinyj19@mails.tsinghua.edu.cn, {xin_wang,zwzhang}@tsinghua.edu.cn, h-chen20@mails.tsinghua.edu.cn, wwzhu@tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1: MTGC3 |
| Open Source Code | Yes | https://github.com/THUMNLab/AutoGL-light |
| Open Datasets | Yes | Real-world Datasets. We choose three widely-used multi-task graph classification datasets included in OGB [13]: OGBG-Tox21 [14], OGBG-ToxCast [35], and OGBG-Sider [18]. |
| Dataset Splits | Yes | We choose three widely-used multi-task graph classification datasets included in OGB [13]: OGBG-Tox21 [14], OGBG-ToxCast [35], and OGBG-Sider [18]. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not list specific software dependencies with their version numbers. |
| Experiment Setup | Yes | Experimental Details. We set the number of layers as 3 for synthetic datasets, and 5 for real-world datasets. For all datasets except ToxCast, we use the task-separate head. For ToxCast, we use the cross-mixed head with 16 chunks. We train our models with a batch size of 32 for synthetic datasets and 128 for real-world datasets. We use Adam optimizer with a learning rate of 0.001. |
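For reference, the reported experiment setup can be collected into a single configuration helper. This is a minimal sketch assembled from the values quoted above; the function and key names (`make_config`, `num_layers`, etc.) are illustrative, not identifiers from the paper's code.

```python
def make_config(dataset: str) -> dict:
    """Hyperparameters as reported in the paper's experimental details.

    Assumes synthetic datasets are identified by a "synthetic" prefix;
    the paper does not specify dataset identifiers at the code level.
    """
    synthetic = dataset.startswith("synthetic")
    return {
        "num_layers": 3 if synthetic else 5,          # 3 synthetic / 5 real-world
        "batch_size": 32 if synthetic else 128,       # 32 synthetic / 128 real-world
        "optimizer": "Adam",
        "learning_rate": 1e-3,
        # Task-separate head for all datasets except ToxCast,
        # which uses the cross-mixed head with 16 chunks.
        "head": "cross-mixed" if dataset == "OGBG-ToxCast" else "task-separate",
        "num_chunks": 16 if dataset == "OGBG-ToxCast" else None,
    }
```

This captures everything the setup row reports; hardware and software dependencies remain unspecified, as noted above.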