Joint Data-Task Generation for Auxiliary Learning
Authors: Hong Chen, Xin Wang, Yuwei Zhou, Yijian Qin, Chaoyu Guan, Wenwu Zhu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our proposed DTG-AuxL framework consistently outperforms existing methods in various auxiliary learning scenarios, particularly when the manually collected auxiliary data and tasks are unhelpful. |
| Researcher Affiliation | Academia | Hong Chen (1), Xin Wang (1,2), Yuwei Zhou (1), Yijian Qin (1), Chaoyu Guan (1), Wenwu Zhu (1,2); (1) Department of Computer Science and Technology, Tsinghua University; (2) Beijing National Research Center for Information Science and Technology, Tsinghua University |
| Pseudocode | Yes | We summarize the complete algorithm in Appendix 2 |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | We conduct our experiments on two scenarios... (i) CUB [25]: ... (ii) CIFAR100 [26]: ... We choose the widely used Amazon Toys and Movies [27] datasets... (i) CIFAR10-100... (ii) Pet-CUB... on the Pet [28] dataset |
| Dataset Splits | No | There is also a validation dataset D_v which is used to evaluate the model performance on the primary task. The paper mentions the use of a validation set but does not provide specific split percentages or sample counts for reproduction. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, memory) used for running the experiments, only model architectures and datasets. |
| Software Dependencies | No | The paper mentions software components like 'ResNet18', 'AutoInt', and 'MLP' but does not specify version numbers for any programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | Yes | In the generator, the embedding dimension mn is searched from {32, 64}, and the layer number of the MLP is searched from {2, 3, 4}. For the head of each task, we adopt a Multi-Layer Perceptron (MLP) whose layer number is searched from {1, 2}. N is fixed to 3 in all our experiments. |
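The search space quoted in the Experiment Setup row can be sketched as a small grid. This is a minimal illustration with hypothetical variable names (the paper does not provide an implementation); it only enumerates the 2 × 3 × 2 hyperparameter combinations described above.

```python
from itertools import product

# Hyperparameter grid quoted from the paper; the key names here are
# hypothetical labels, not identifiers from the authors' code.
search_space = {
    "embed_dim": [32, 64],        # generator embedding dimension m_n
    "gen_mlp_layers": [2, 3, 4],  # generator MLP depth
    "head_mlp_layers": [1, 2],    # per-task head MLP depth
}
N = 3  # fixed to 3 in all experiments, per the paper

# Enumerate every configuration in the grid (2 * 3 * 2 = 12 candidates).
configs = [dict(zip(search_space, values))
           for values in product(*search_space.values())]
print(len(configs))  # → 12
```

Each entry in `configs` is one candidate setting a search procedure would evaluate; the paper does not state which search strategy (grid, random, or other) was used over this space.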