Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference
Authors: Xiaofeng Liu, Bo Hu, Linghao Jin, Xu Han, Fangxu Xing, Jinsong Ouyang, Jun Lu, Georges El Fakhri, Jonghye Woo
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various benchmarks demonstrate that our framework is robust to label shift and the cross-domain accuracy is significantly improved, thereby achieving superior performance over the conventional IFL counterparts. |
| Researcher Affiliation | Academia | (1) Dept. of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA; (2) Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA; (3) National University of Singapore, Singapore; (4) Johns Hopkins University, Baltimore, MD, USA |
| Pseudocode | Yes | Algorithm 1 VB inference under conditional and label shifts |
| Open Source Code | No | The paper does not provide any explicit statement about releasing open-source code or a link to a code repository. |
| Open Datasets | Yes | VLCS [Ghifary et al., 2017] contains images from four different datasets including PASCAL VOC2007 (V), LabelMe (L), Caltech-101 (C), and SUN09 (S). ... publicly available pre-extracted DeCAF6 features (4,096-dim vector) [Donahue et al., 2014]. The object recognition benchmark PACS [Li et al., 2017] |
| Dataset Splits | Yes | We follow the previous works to exploit the publicly available pre-extracted DeCAF6 features (4,096-dim vector) [Donahue et al., 2014] for leave-one-domain-out validation by randomly splitting each domain into 70% training and 30% testing. (A sketch of this split protocol appears after the table.) |
| Hardware Specification | No | The paper states 'We implement our methods using PyTorch' but does not specify any hardware components like GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions 'We implement our methods using PyTorch', but it does not specify any version numbers for PyTorch or other software libraries. |
| Experiment Setup | Yes | We implement our methods using PyTorch; we empirically set α = 0.5, β = 1, θ = 0.1 and t = 100 via grid searching. ... The model is trained for 30 epochs via mini-batch stochastic gradient descent (SGD) with a batch size of 128, a momentum of 0.9, and a weight decay of 5e-4. The initial learning rate is set to 1e-3, which is scaled by a factor of 0.1 after 80% of the epochs. (The optimizer setup is sketched after the table.) |
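
The leave-one-domain-out protocol quoted above can be illustrated with a minimal sketch. This assumes per-domain `(features, labels)` NumPy arrays of pre-extracted 4,096-dim DeCAF6 features; the `DOMAINS` names, the `split_domain` helper, and the target-side handling are illustrative assumptions, not the authors' released code.

```python
# Sketch of leave-one-domain-out validation with a random 70%/30%
# per-domain split, as described in the paper's dataset-splits quote.
import numpy as np

DOMAINS = ["VOC2007", "LabelMe", "Caltech101", "SUN09"]  # VLCS domains

def split_domain(features, labels, train_frac=0.7, seed=0):
    """Randomly split one domain into 70% training / 30% testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    cut = int(train_frac * len(labels))
    tr, te = idx[:cut], idx[cut:]
    return (features[tr], labels[tr]), (features[te], labels[te])

def leave_one_domain_out(data, held_out, seed=0):
    """Train on the 70% splits of the source domains; the held-out
    domain stays unseen (its exact evaluation split is an assumption)."""
    train_parts = []
    for name, (x, y) in data.items():
        if name == held_out:
            continue  # target domain is never used for training
        (xtr, ytr), _ = split_domain(x, y, seed=seed)
        train_parts.append((xtr, ytr))
    return train_parts, data[held_out]
```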
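
The reported optimization settings map directly onto standard PyTorch components. Below is a minimal sketch of that configuration only; the `model` (a placeholder linear classifier over DeCAF6 features) and the omitted mini-batch loop are assumptions, since the paper does not release code.

```python
# Sketch of the quoted setup: 30 epochs of mini-batch SGD, batch size
# 128, momentum 0.9, weight decay 5e-4, initial learning rate 1e-3
# scaled by 0.1 after 80% of the epochs.
import torch
import torch.nn as nn

model = nn.Linear(4096, 5)  # placeholder classifier over DeCAF6 features

epochs = 30
optimizer = torch.optim.SGD(
    model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4
)
# Drop the learning rate by a factor of 0.1 once 80% of the epochs
# (here, epoch 24 of 30) have elapsed.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[int(0.8 * epochs)], gamma=0.1
)

for epoch in range(epochs):
    # ... one pass over mini-batches of size 128 would go here ...
    scheduler.step()
```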