A Bit More Bayesian: Domain-Invariant Learning with Uncertainty
Authors: Zehao Xiao, Jiayi Shen, Xiantong Zhen, Ling Shao, Cees Snoek
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the effectiveness of our proposal on four widely used cross-domain visual recognition benchmarks. Ablation studies validate the synergistic benefits of our Bayesian treatment when jointly learning domain-invariant representations and classifiers for domain generalization. |
| Researcher Affiliation | Collaboration | AIM Lab, University of Amsterdam, The Netherlands; Inception Institute of Artificial Intelligence, UAE. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Source code is publicly available at https://github.com/zzzx1224/A-Bit-More-Bayesian.git. |
| Open Datasets | Yes | We conduct our experiments on four widely used benchmarks for domain generalization and report the mean classification accuracy on target domains. PACS (Li et al., 2017) ... Office-Home (Venkateswara et al., 2017) ... Rotated MNIST and Fashion-MNIST are introduced in (Piratla et al., 2020). |
| Dataset Splits | Yes | We select λφ and λψ based on validation set performance and summarize their influence in the supplementary material. The model with the highest validation set accuracy is employed for evaluation on the target domain. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU model, CPU type, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimization' and 'ResNet-18' but does not specify version numbers for these or any other software libraries or frameworks (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | During training we use Adam optimization (Kingma & Ba, 2014) with a learning rate of 0.0001, and train for 10,000 iterations. In each iteration we choose one source domain as the meta-target domain. The batch size is 128. ... We select λφ and λψ based on validation set performance... The optimal values of λφ and λψ are 0.1 and 100. Parameters σ1 and σ2 in (12) are set to 0.1 and 1.5. (This configuration is illustrated in the sketch after this table.) |
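
The quoted setup maps onto a compact training loop. Below is a minimal sketch, assuming a PyTorch implementation with a ResNet-18 backbone on PACS (7 classes, source domains art_painting/cartoon/photo with sketch held out). The data loader (`sample_batch`) and the KL regularizer (`kl_regularizer`) are hypothetical stubs standing in for the paper's variational objectives; this is not the authors' released code, which lives in the repository linked above.

```python
# Minimal sketch of the reported setup, assuming PyTorch. The loader and the
# KL regularizer are hypothetical stubs, not the authors' implementation.
import random
import torch
import torch.nn as nn
from torchvision.models import resnet18

LEARNING_RATE = 1e-4      # Adam learning rate quoted in the paper
NUM_ITERATIONS = 10_000   # training iterations
BATCH_SIZE = 128
LAMBDA_PHI, LAMBDA_PSI = 0.1, 100.0  # KL weights selected on the validation set
SIGMA_1, SIGMA_2 = 0.1, 1.5          # prior scales from Eq. (12)

NUM_CLASSES = 7  # e.g. PACS; adjust per benchmark
SOURCE_DOMAINS = ["art_painting", "cartoon", "photo"]  # held-out target: sketch


def sample_batch(domains, batch_size):
    """Hypothetical stand-in for a multi-domain loader: random tensors only."""
    x = torch.randn(batch_size, 3, 224, 224)
    y = torch.randint(0, NUM_CLASSES, (batch_size,))
    return x, y


def kl_regularizer(module, sigma1, sigma2):
    """Placeholder for a KL(q || p) term over the Bayesian layers; a faithful
    version would use the layer's variational posterior and the scale priors."""
    return torch.zeros(())


model = resnet18(num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()

for it in range(NUM_ITERATIONS):
    # Each iteration holds out one source domain as the meta-target.
    meta_target = random.choice(SOURCE_DOMAINS)
    meta_sources = [d for d in SOURCE_DOMAINS if d != meta_target]

    images, labels = sample_batch(meta_sources, BATCH_SIZE)
    loss = (criterion(model(images), labels)
            + LAMBDA_PHI * kl_regularizer(model, SIGMA_1, SIGMA_2)
            + LAMBDA_PSI * kl_regularizer(model, SIGMA_1, SIGMA_2))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Per the paper, the checkpoint with the highest validation accuracy is
    # kept and evaluated once on the unseen target domain.
```

The single `kl_regularizer` stub stands in for what are, in the paper, two distinct terms: one over the Bayesian feature extractor (weighted by λφ = 0.1) and one over the Bayesian classifier (weighted by λψ = 100), each computed against priors parameterized by σ1 and σ2.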