Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
TTE: Two Tokens Are Enough to Improve Parameter-Efficient Tuning
Authors: Jiacheng Ruan, Mingye Xie, Jingsheng Gao, Xian Gao, Suncheng Xiang, Ting Liu, Yuzhuo Fu
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on 24 datasets across image and few-shot classification tasks demonstrate that our fine-tuning framework not only mitigates overfitting but also significantly enhances PET's performance. |
| Researcher Affiliation | Academia | Jiacheng Ruan, Mingye Xie, Jingsheng Gao, Xian Gao, Suncheng Xiang, Ting Liu, Yuzhuo Fu* Shanghai Jiao Tong University, China EMAIL |
| Pseudocode | Yes | The PyTorch-style code for Adaptive Dropout: class AdaptiveDropout(nn.Module): def __init__(self, p=0.9): super(AdaptiveDropout, self).__init__() p = torch.log(torch.tensor(p / (1 - p))) self.p = nn.Parameter(p) def forward(self, x): if self.training: p = torch.sigmoid(self.p) mask_ratio = (1 - p) * torch.ones_like(x) binary_mask = torch.bernoulli(mask_ratio) return binary_mask * x / (1 - p) return x (Figure 3: The PyTorch-style code for Adaptive Dropout.) |
| Open Source Code | Yes | Code https://github.com/JCruan519/TTE |
| Open Datasets | Yes | We utilize the VTAB-1K benchmark (Zhai et al. 2019) to validate our TTE framework for image classification tasks... Food-101 (Bossard, Guillaumin, and Van Gool 2014), Oxford Pets (Parkhi et al. 2012), Stanford Cars (Krause et al. 2013), Oxford-Flowers102 (Nilsback and Zisserman 2006), and FGVC-Aircraft (Maji et al. 2013) datasets... ViT-B/16 (Dosovitskiy et al. 2020) model, pretrained on the ImageNet-21K dataset (Deng et al. 2009). |
| Dataset Splits | Yes | For the VTAB-1K benchmark's 19 image classification tasks, each task is equipped with only 1,000 training samples and approximately 20,000 test samples... we conduct validation under {1, 2, 4, 8, 16}-shot settings |
| Hardware Specification | Yes | Pytorch (Paszke et al. 2019) and Transformers (Wolf et al. 2020) are utilized to implement experiments on NVIDIA A100 GPUs. |
| Software Dependencies | No | Pytorch (Paszke et al. 2019) and Transformers (Wolf et al. 2020) are utilized to implement experiments on NVIDIA A100 GPUs. (No specific version numbers are provided for Pytorch or Transformers.) |
| Experiment Setup | Yes | For PET methods, unless specifically stated otherwise, we fix the length of the learnable token introduced by VPT (Jia et al. 2022) at 20, set the hidden layer dimensions of S-Ada. (Houlsby et al. 2019) and P-Ada. (Chen et al. 2022) to 4, fix the scaling factor at 0.1, set the rank of LoRA (Hu et al. 2021) at 4, and the scaling factor at 1. In terms of training configurations, we follow the work of predecessors (Lian et al. 2022; Jie and Deng 2023; Luo et al. 2023a), to ensure fairness and reproducibility. Regarding our TTE framework, to avoid redundancy brought about by further hyperparameter adjustment, we fix temperature α at 0.5, and only allow β to be searched from {0.25, 0.5, 0.75}. |
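The Adaptive Dropout snippet quoted in the Pseudocode row can be sketched in plain Python. This is not the paper's implementation (which is PyTorch and stores the logit as a learnable `nn.Parameter`); here the logit is a fixed attribute, the learning step is omitted, and the class name `AdaptiveDropout` follows the quoted figure. The masking logic mirrors the snippet: the drop probability `p` is recovered via a sigmoid, each element survives with probability `1 - p`, and survivors are rescaled by `1 / (1 - p)` (inverted dropout) so the expected output matches the input.

```python
import math
import random

class AdaptiveDropout:
    """Stdlib sketch of the paper's Figure 3 Adaptive Dropout
    (illustrative only; the original parameterizes the logit as a
    learnable PyTorch parameter)."""

    def __init__(self, p=0.9):
        # Store the drop probability as a logit, mirroring the paper's
        # log(p / (1 - p)); sigmoid of this recovers p in (0, 1).
        self.logit = math.log(p / (1.0 - p))
        self.training = True

    def forward(self, x):
        # At inference time, dropout is the identity.
        if not self.training:
            return list(x)
        p = 1.0 / (1.0 + math.exp(-self.logit))  # sigmoid -> drop prob
        keep = 1.0 - p
        # Bernoulli mask: keep each element with probability 1 - p,
        # rescaling survivors by 1 / (1 - p) so E[output] = input.
        return [xi / keep if random.random() < keep else 0.0 for xi in x]
```

As a quick sanity check, with `p=0.5` every surviving element of an all-ones input is scaled to 2.0, and the output mean stays close to 1.0 for long inputs.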