Adapting to Distribution Shift by Visual Domain Prompt Generation

Authors: Zhixiang Chi, Li Gu, Tao Zhong, Huan Liu, Yuanhao Yu, Konstantinos N. Plataniotis, Yang Wang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted to validate the domain knowledge extraction. The proposed method outperforms previous work on 5 large-scale benchmarks including WILDS and DomainNet.
Researcher Affiliation | Academia | Zhixiang Chi1, Li Gu2, Tao Zhong1, Huan Liu3, Yuanhao Yu3, Konstantinos N. Plataniotis1, Yang Wang2. 1 University of Toronto, 2 Concordia University, 3 McMaster University. Contact: zhixiang.chi@mail.utoronto.ca
Pseudocode | Yes | Algorithm 1: Training scheme for VDPG
Open Source Code | No | Our source code will be available upon paper acceptance.
Open Datasets | Yes | We follow Meta-DMoE and MABN to evaluate VDPG on the challenging real-world WILDS (Koh et al., 2021) benchmarks.
Dataset Splits | Yes | We follow the official splits into source and target domains, and the official metrics: accuracy, Macro F1, worst-case (WC) accuracy, Pearson correlation (r), and its worst case. Specifically, for each episode, we first sample one domain Dn ∼ p(s), and then sample two non-overlapping sets from it: a support set (xS) and a query set (xQ, yQ) (L4-5).
Hardware Specification | No | The paper specifies the model architectures (e.g., ViT-B/16, ViT-L/14) but does not provide any specific details about the hardware (GPUs, CPUs, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using CLIP and ViT models, but does not provide specific version numbers for any software dependencies, such as deep learning frameworks or libraries.
Experiment Setup | Yes | We perform training using SGD with a batch size of 64 for 30 epochs. The initial learning rates are set to 3e-3 and 5e-4 with cosine decay for WILDS and DomainNet, respectively. The loss weights γ and λ are set to 0.1.
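The episodic sampling quoted under Dataset Splits (one domain per episode, then non-overlapping support and query sets) can be sketched as follows. This is a minimal sketch, not the paper's released pipeline; the names `domains`, `support_size`, and `query_size` are hypothetical, and each domain is assumed to be a list of (x, y) pairs:

```python
import random

def sample_episode(domains, support_size, query_size, rng=random):
    """Sample one episode: pick a domain Dn ~ p(s), then draw
    non-overlapping support (unlabeled xS) and query (xQ, yQ) sets."""
    domain = rng.choice(domains)
    indices = rng.sample(range(len(domain)), support_size + query_size)
    support = [domain[i][0] for i in indices[:support_size]]   # xS only
    query = [domain[i] for i in indices[support_size:]]        # (xQ, yQ)
    return support, query
```

Drawing all indices with a single `rng.sample` call guarantees the support and query sets never overlap, matching the protocol described in the report.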
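The cosine decay mentioned in the Experiment Setup row can be sketched as a simple schedule function. This assumes plain cosine annealing to zero with no warmup, which the paper does not specify:

```python
import math

def cosine_lr(step, total_steps, base_lr):
    """Cosine decay: base_lr at step 0, annealed to 0 at total_steps."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))

# Reported initial rates: 3e-3 for WILDS, 5e-4 for DomainNet, over 30 epochs.
```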