Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone
Authors: Zeyinzi Jiang, Chaojie Mao, Ziyuan Huang, Ao Ma, Yiliang Lv, Yujun Shen, Deli Zhao, Jingren Zhou
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both discriminative and generative tasks demonstrate the superiority of our method over existing alternatives from the perspectives of efficacy and efficiency. |
| Researcher Affiliation | Collaboration | 1 Alibaba Group, 2 National University of Singapore, 3 Ant Group |
| Pseudocode | No | The paper includes equations and architectural diagrams but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project page: https://res-tuning.github.io/. |
| Open Datasets | Yes | For most experiments, we adopt ViT-B/16 [13] pre-trained on ImageNet-21K [11] as the backbone model, following VPT [28]. ... We evaluate the text-to-image generation performance on COCO2017 dataset [43]. ... Table 8: Datasets used for generative tasks. ... Table 9: Datasets used for discriminative tasks. |
| Dataset Splits | No | The paper refers to evaluating on validation sets (e.g., 'combine the validation set of 19 tasks in VTAB-1K', 'sample 10k captions from the validation set'), but it does not give explicit train/validation/test splits with counts or percentages for all experiments. |
| Hardware Specification | Yes | Device: A100 × 1 (for discriminative tasks), A100 × 8 (for generative tasks) |
| Software Dependencies | Yes | Library: Diffusers |
| Experiment Setup | Yes | Table 10: Hyperparameter selection for discriminative tasks. ... Table 11: Hyperparameter selection for generative tasks. |
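
For context on the method audited above, the core idea named in the paper's title (a lightweight tuner decoupled from a frozen backbone and applied as a residual branch) can be illustrated with a short sketch. This is a minimal, hypothetical illustration, not the authors' released implementation; the names `ResTuner`, `TunedBlock`, the bottleneck width, and the zero initialization are assumptions made here for clarity.

```python
import torch
import torch.nn as nn


class ResTuner(nn.Module):
    """A low-rank tuner applied as a residual branch to a block's output.

    Hypothetical sketch of the 'unbound tuner' idea: the tuner reads the
    frozen block's output and adds a learned correction, so only the tuner
    parameters are trained.
    """

    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        # Zero-init the up-projection so the tuner starts as an identity branch.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class TunedBlock(nn.Module):
    """Wraps a frozen backbone block with a trainable tuner on its output."""

    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad_(False)  # the backbone block stays frozen
        self.tuner = ResTuner(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.tuner(self.block(x))
```

Under this sketch, only the tuner parameters would be handed to the optimizer, which matches the parameter-efficient tuning setting referenced in the table (a frozen ViT-B/16 backbone evaluated on benchmarks such as VTAB-1K).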