LION: Implicit Vision Prompt Tuning
Authors: Haixin Wang, Jianlong Chang, Yihang Zhai, Xiao Luo, Jinan Sun, Zhouchen Lin, Qi Tian
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Various experiments have validated that our LION obtains promising performance on a wide range of datasets. |
| Researcher Affiliation | Collaboration | 1 National Engineering Research Center for Software Engineering, Peking University; 2 Huawei Cloud & AI; 3 School of Mathematical Sciences, Peking University; 4 National Key Lab of General AI, School of Intelligence Science and Technology, Peking University; 5 Peng Cheng Laboratory |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | CIFAR10 (Krizhevsky, Hinton et al. 2009), CIFAR100 (Krizhevsky, Hinton et al. 2009), ImageNet100 (Deng et al. 2009), Flower (Nilsback and Zisserman 2008), Stanford Dogs (Khosla et al. 2011), Stanford Cars (Gebru et al. 2017), Clothing (Tanaka et al. 2018) |
| Dataset Splits | No | The paper specifies training and testing image counts for CIFAR10, CIFAR100, and ImageNet100, but does not explicitly state the validation split or its size for any dataset. While 'validation accuracy' is mentioned, the methodology for creating the validation set is not provided. |
| Hardware Specification | Yes | The whole experiments are implemented on the NVIDIA V100 GPU with PyTorch. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | In experiments on the datasets above, we utilize the Adam optimizer with a momentum of 0.9, batch size of 64, and learning rate of 1e-5. |
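
The reported experiment setup (Adam, momentum 0.9, batch size 64, learning rate 1e-5) can be mirrored in a short PyTorch sketch. This is not the authors' released code: the linear model and CIFAR10 loader below are placeholders, and reading "momentum of 0.9" as Adam's first-moment coefficient (beta1) is an assumption on our part.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hyperparameters as stated in the paper's experiment setup row.
BATCH_SIZE = 64
LEARNING_RATE = 1e-5
BETA1 = 0.9  # assumption: "momentum of 0.9" interpreted as Adam's beta1

# Placeholder dataset; the paper also evaluates CIFAR100, ImageNet100, Flower, etc.
transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

# Placeholder model; LION's implicit vision prompt-tuning modules are not reproduced here.
model = torch.nn.Linear(3 * 224 * 224, 10)

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, betas=(BETA1, 0.999))
criterion = torch.nn.CrossEntropyLoss()

for images, labels in train_loader:
    optimizer.zero_grad()
    logits = model(images.flatten(start_dim=1))
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    break  # a single training step is shown for illustration
```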