Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data
Authors: Jiahan Zhang, Qi Wei, Feng Liu, Lei Feng
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on nine benchmark datasets with three learning paradigms demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | 1 Singapore University of Technology and Design, Singapore 2 Nanyang Technological University, Singapore 3 University of Melbourne, Australia. |
| Pseudocode | Yes | Algorithm 1 Top-K Selection Process in Each Iteration |
| Open Source Code | Yes | Our code can be found here. |
| Open Datasets | Yes | We conduct an extensive evaluation of our method on nine classification datasets from diverse domains, including FGVC-Aircraft (Maji et al., 2013), EuroSAT (Helber et al., 2019), CUB (Wah et al., 2011), Flowers102 (Nilsback & Zisserman, 2008), RESISC45 (Cheng et al., 2017), DTD (Cimpoi et al., 2014), CALTECH-101 (Fei-Fei et al., 2004), UCF-101 (Soomro et al., 2012), and CIFAR-100 (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper provides 'Training set size' and 'Testing set size' in Table 8 for various datasets. For Semi-Supervised Learning, it mentions using 'two labeled samples per class' but does not explicitly define a separate 'validation set' with specific percentages or counts for hyperparameter tuning. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Optimizer SGD' and 'Network ViT-B/32' but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Table 8: Detailed settings for experiments in Section 4. Training procedure: Network ViT-B/32, Batch size 64, Epochs 50 (with the first two epochs used for warmup), Optimizer SGD, Momentum 0.9, Learning rate (LR) 0.02, Weight decay 5e-2, LR scheduler Cosine Annealing LR. Hyperparameters: α in intra-instance label selection (e.g., 0.60 for Flowers102), β in inter-instance label selection (e.g., 0.99 for Flowers102). |
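
As a reading aid, the Table 8 settings quoted above translate into roughly the following PyTorch optimizer/scheduler configuration. This is a minimal sketch, assuming a PyTorch implementation; the `prompt_learner` placeholder module and the warmup handling are illustrative assumptions, and only the hyperparameter values (SGD, momentum 0.9, LR 0.02, weight decay 5e-2, cosine annealing, batch size 64, 50 epochs with 2 warmup epochs) come from the paper's reported settings.

```python
# Sketch of the training configuration reported in Table 8 of the paper.
# Only the hyperparameter values are taken from the paper; the module being
# tuned and the warmup schedule are placeholders/assumptions.
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

EPOCHS = 50        # total epochs reported in Table 8
WARMUP_EPOCHS = 2  # first two epochs reserved for warmup
BATCH_SIZE = 64    # reported batch size

# Placeholder for the prompt parameters tuned on top of the frozen ViT-B/32 backbone.
prompt_learner = torch.nn.Linear(512, 512)

optimizer = SGD(
    prompt_learner.parameters(),
    lr=0.02,
    momentum=0.9,
    weight_decay=5e-2,
)
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    # Epochs 0-1 correspond to the warmup epochs mentioned in Table 8; the
    # exact warmup schedule is not specified in the quoted settings.
    # ... forward pass on a batch of BATCH_SIZE unlabeled images,
    # candidate-pseudolabel loss, optimizer.zero_grad(), loss.backward(),
    # optimizer.step() ...
    scheduler.step()
```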