Visual Representation Learning over Latent Domains

Authors: Lucas Deecke, Timothy Hospedales, Hakan Bilen

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that this setting is challenging for standard models and existing multi-domain approaches, calling for new customized solutions: a sparse adaptation strategy is formulated which enhances performance by accounting for latent domains in data. We evaluate our proposed methods on three latent domain benchmarks: Office-Home, PACS, and DomainNet.
Researcher Affiliation | Academia | Lucas Deecke, Timothy Hospedales & Hakan Bilen, University of Edinburgh, {l.deecke,t.hospedales,h.bilen}@ed.ac.uk
Pseudocode | No | The paper describes the proposed method (SLA) using mathematical equations and textual descriptions, but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at github.com/VICO-UoE/LatentDomainLearning.
Open Datasets | Yes | We evaluate our proposed methods on three latent domain benchmarks: Office-Home, PACS, and DomainNet (cf. Fig. 6, which shows example images from these benchmarks). A ResNet-26 model pretrained on a downsized version of ImageNet is used, as in previous work by Rebuffi et al. (2018).
Dataset Splits | Yes | All experiments follow the preprocessing of Rebuffi et al. (2017; 2018), alongside standard augmentations such as normalization, random cropping, etc. This experimental setup is identical to previous work on empirical fairness (Wang et al., 2020; Ramaswamy et al., 2020), which, however, different from our work, focused on learning models that have access to the gender attribute d. We use the optimization settings introduced in Section 4 for 70 epochs with reductions at epochs 30, 40, and 50, selecting the best model on the validation split. Top-1 validation accuracy is reported on the imbalanced CIFAR benchmarks (Buda et al., 2018). (An illustrative preprocessing sketch appears below the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud computing specifications used for running experiments.
Software Dependencies | No | The paper mentions PyTorch (Paszke et al., 2017) but does not specify a version number for PyTorch or any other software dependency, which is required for reproducibility.
Experiment Setup | Yes | Training is carried out for 120 epochs using stochastic gradient descent (momentum parameter of 0.9), a batch size of 128, weight decay of 10^-4, and an initial learning rate of 0.1 (reduced by 1/10 at epochs 80 and 100). (An illustrative PyTorch sketch of this setup appears below the table.)
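
The Dataset Splits row states that preprocessing follows Rebuffi et al. (2017; 2018) with standard augmentations such as normalization and random cropping. The snippet below is only a hedged illustration of what such a torchvision pipeline could look like; the crop size, padding, horizontal flip, and normalization statistics are assumptions made for this sketch and are not reported in the paper.

```python
from torchvision import transforms

# Illustrative pipeline only: the paper reports normalization, random
# cropping, etc., following Rebuffi et al. (2017; 2018). The 72x72 crop,
# padding, flip, and ImageNet statistics below are assumptions.
train_transform = transforms.Compose([
    transforms.RandomCrop(72, padding=8),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

val_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```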
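
The Experiment Setup row gives the concrete optimization hyperparameters. The following minimal PyTorch sketch wires those values into an optimizer and learning-rate schedule; the model and data are synthetic placeholders (the authors' actual code, using a ResNet-26 backbone, is at github.com/VICO-UoE/LatentDomainLearning), and only the reported values (SGD, momentum 0.9, weight decay 1e-4, learning rate 0.1 reduced by 1/10 at epochs 80 and 100, 120 epochs, batch size 128) come from the paper.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and synthetic data for illustration only.
model = torch.nn.Linear(512, 10)
criterion = torch.nn.CrossEntropyLoss()
dummy_data = TensorDataset(torch.randn(1024, 512), torch.randint(0, 10, (1024,)))
train_loader = DataLoader(dummy_data, batch_size=128, shuffle=True)  # batch size 128

# Reported settings: SGD with momentum 0.9, weight decay 1e-4, initial
# learning rate 0.1, reduced by a factor of 10 at epochs 80 and 100.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[80, 100], gamma=0.1)

for epoch in range(120):  # 120 epochs in total
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```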