UMFC: Unsupervised Multi-Domain Feature Calibration for Vision-Language Models

Authors: Jiachen Liang, Ruibing Hou, Minyang Hu, Hong Chang, Shiguang Shan, Xilin Chen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on multiple settings including transductive learning and test-time adaptation. Extensive experiments show that our method outperforms CLIP and performs on par with the state-of-the-arts that need additional annotations or optimization.
Researcher Affiliation | Academia | Jiachen Liang (1,2), Ruibing Hou (1), Minyang Hu (1,2), Hong Chang (1,2), Shiguang Shan (1,2), Xilin Chen (1,2); 1: Institute of Computing Technology, Chinese Academy of Sciences; 2: University of Chinese Academy of Sciences.
Pseudocode | Yes | Algorithms are provided in Appendix C. Algorithm 1 summarizes the proposed UMFC method under the Test-Time Adaptation (TTA) setting. Algorithm 2 summarizes the proposed UMFC method under Unsupervised Calibration (UC) / Transductive Learning (TL). A hedged, illustrative sketch of the calibration procedure follows the table.
Open Source Code | Yes | Our code is available at https://github.com/GIT-LJc/UMFC.
Open Datasets | Yes | Datasets. Our UMFC is training-free, which only calibrates the image and text features using Equations 4 and 6. To analyze the model's generalization capability, we use two large-scale datasets for evaluation: 1) DomainNet [29]... 2) ImageNet Variants, composed of several datasets shifted from ImageNet, including ImageNet-A (IN-A) [13], ImageNet-R (IN-R) [12], and ImageNet-Sketch (IN-S) [36].
Dataset Splits | No | To ensure the reliability of the evaluation results, we randomly sample the test data to construct a balanced test set where both the domain and category distributions are uniform. While the paper describes this test-set construction (a hedged sampling sketch follows the table), it does not explicitly provide train/validation/test splits for all experimental settings; it focuses on unlabeled data.
Hardware Specification | Yes | All experiments are performed on a GeForce RTX 3090 Ti GPU.
Software Dependencies | No | We select CLIP [30] as our pre-trained vision-language model. We use CLIP with ViT-B/16 [6] as the image encoder, and keep the original transformer as the text encoder. The paper mentions the models used but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries); a hedged CLIP-loading sketch follows the table.
Experiment Setup | Yes | The images are resized to 224 × 224. The hyper-parameter M (cluster number) is set to 6 for DomainNet. By default, the batch size is set to 100. The reported values are collected into a small configuration sketch after the table.
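
As noted in the Software Dependencies row, the paper specifies CLIP with a ViT-B/16 image encoder but no library versions. Below is a minimal sketch of the assumed setup using PyTorch and OpenAI's clip package; the image path and class names are placeholders, not values from the paper.

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# ViT-B/16 image encoder; the original transformer is kept as the text encoder.
model, preprocess = clip.load("ViT-B/16", device=device)
model.eval()

# `preprocess` resizes and center-crops inputs to 224 x 224, matching the reported setup.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder path
texts = clip.tokenize([f"a photo of a {c}" for c in ["dog", "cat"]]).to(device)  # placeholder classes

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(texts)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    # Zero-shot prediction via cosine similarity.
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)
```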
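The Pseudocode row refers to Algorithms 1 and 2 (UMFC under TTA and under UC/TL), and the Open Datasets row notes that the method is training-free and calibrates image and text features via Equations 4 and 6. The sketch below is only an illustration of a clustering-based, training-free calibration of that general kind, operating on NumPy feature arrays: the k-means step, the per-cluster re-centering of image features, and the shift applied to text features are assumptions, not the paper's exact equations.

```python
import numpy as np
from sklearn.cluster import KMeans

def calibrated_predictions(image_feats, text_feats, n_clusters=6, eps=1e-8):
    """Illustrative training-free calibration; NOT the paper's exact Eqs. 4 and 6.

    image_feats: (N, D) L2-normalized image features from unlabeled data.
    text_feats:  (C, D) L2-normalized text features of the class prompts.
    """
    # Cluster unlabeled image features; clusters act as surrogate domains.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(image_feats)
    centers = km.cluster_centers_                        # (M, D) per-cluster means

    # Assumed image-side calibration: remove each feature's cluster mean.
    img_cal = image_feats - centers[km.labels_]

    # Assumed text-side calibration: shift prompts by the average cluster mean
    # (one plausible choice; the form used in the paper may differ).
    txt_cal = text_feats - centers.mean(axis=0, keepdims=True)

    # Re-normalize and classify by cosine similarity.
    img_cal /= np.linalg.norm(img_cal, axis=1, keepdims=True) + eps
    txt_cal /= np.linalg.norm(txt_cal, axis=1, keepdims=True) + eps
    return (img_cal @ txt_cal.T).argmax(axis=1)          # (N,) predicted labels
```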
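The Dataset Splits row quotes the construction of a test set that is uniform over both domains and categories. One way to do such balanced subsampling is sketched below; the per-pair budget `per_pair` and the (path, domain, label) record format are assumptions, since the paper does not report these details.

```python
import random
from collections import defaultdict

def balanced_test_set(samples, per_pair=10, seed=0):
    """Subsample `samples` = [(path, domain, label), ...] so that every
    (domain, label) pair contributes the same number of test images."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for path, domain, label in samples:
        buckets[(domain, label)].append((path, domain, label))

    # Use the largest budget every pair can satisfy, capped at `per_pair`.
    budget = min(per_pair, min(len(group) for group in buckets.values()))

    subset = []
    for group in buckets.values():
        subset.extend(rng.sample(group, budget))
    rng.shuffle(subset)
    return subset
```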
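Finally, the hyper-parameters reported in the Experiment Setup row can be summarized in a small configuration dictionary; the key names are hypothetical and only the values come from the paper.

```python
# Values from the Experiment Setup row; key names are illustrative only.
CONFIG = {
    "image_size": 224,       # images resized to 224 x 224
    "num_clusters_M": 6,     # cluster number M used for DomainNet
    "batch_size": 100,       # default batch size
    "backbone": "ViT-B/16",  # CLIP image encoder
}
```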