Active Learning for Domain Adaptation: An Energy-Based Approach

Authors: Binhui Xie, Longhui Yuan, Shuang Li, Chi Harold Liu, Xinjing Cheng, Guoren Wang (pp. 8708-8716)

AAAI 2022

Reproducibility Variable Result LLM Response
Research Type Experimental Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world.
Researcher Affiliation Collaboration 1 School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China; 2 School of Software, BNRist, Tsinghua University, Beijing, China; 3 Inceptio Technology, Shanghai, China
Pseudocode Yes Algorithm 1: EADA algorithm
Open Source Code Yes Code is available at https://github.com/BIT-DA/EADA.
Open Datasets Yes VisDA-2017 (Peng et al. 2017), Office-Home (Venkateswara et al. 2017) and Office-31 (Saenko et al. 2010), as well as a challenging semantic segmentation task, i.e., GTAV (Richter et al. 2016) to Cityscapes (Cordts et al. 2016).
Dataset Splits No The paper mentions varying percentages for active learning budget (e.g., '5% target samples as the labeling budget', 'from 0% to 20%'), and refers to 'standard protocols' for dataset usage, but does not explicitly provide the specific train/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification No The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies No All methods are implemented based on PyTorch, employing ResNet (He et al. 2016) models pretrained on ImageNet (Krizhevsky, Sutskever, and Hinton 2012b). (This mentions PyTorch but no version number.)
Experiment Setup Yes To this end, we utilize a commonly used loss in EBMs, i.e., the negative log-likelihood loss that comes from probabilistic modeling to train a model for classification, and it can be formulated as L_nll(x, y; θ) = E(x, y; θ) + (1/τ) log Σ_{c∈Y} exp(−τ E(x, c; θ)), where τ (τ > 0) is the reverse temperature and a low value corresponds to a smooth partition of energy over the space Y. For simplicity, we fix τ = 1... L_fea(x; θ) = max(0, F(x; θ) − Δ), where Δ = E_{x∈S} F(x; θ) is the average value... Overall, the full learning objective is given by: min_θ E_{(x,y)∈S∪T_l} L_nll(x, y; θ) + γ E_{x∈T_u} L_fea(x; θ)... Algorithm 1: EADA algorithm. 1: Input: Labeled source data S, unlabeled target data T_u and labeled target set T_l = ∅, maximum epoch M, selection rounds R, selection ratios α1, α2. 2: for m = 1 to M do... We follow the standard protocols as in (Su et al. 2020; Fu et al. 2021). Also, Table 3 shows 'Effect of selection ratios. α1 / α2 (%): 10/10, 25/4, 50/2, 75/1.3, 100/1, 1/100'.
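The losses quoted above can be sketched numerically. The following is a minimal NumPy sketch, not the authors' implementation: it assumes classifier logits f_c(x) define energies as E(x, c; θ) = −f_c(x), so that with τ = 1 the energy-based NLL reduces to standard cross-entropy. The two-round `select_queries` helper is likewise a simplification: ranking candidates first by free energy and then by predictive entropy is an assumption here; Algorithm 1 in the paper specifies the exact selection scores.

```python
import numpy as np

def logsumexp(a, axis=1):
    """Numerically stable log-sum-exp along an axis."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def free_energy(logits, tau=1.0):
    # F(x; θ) = -(1/τ) log Σ_c exp(-τ E(x, c; θ)), with E(x, c; θ) = -f_c(x)
    return -logsumexp(tau * logits, axis=1) / tau

def energy_nll_loss(logits, labels, tau=1.0):
    # L_nll(x, y; θ) = E(x, y; θ) + (1/τ) log Σ_c exp(-τ E(x, c; θ))
    #                = E(x, y; θ) - F(x; θ); equals cross-entropy when τ = 1
    energy = -logits[np.arange(len(labels)), labels]
    return float((energy - free_energy(logits, tau)).mean())

def free_energy_alignment_loss(tgt_logits, src_logits, tau=1.0):
    # L_fea(x; θ) = max(0, F(x; θ) - Δ), Δ = average source free energy
    delta = free_energy(src_logits, tau).mean()
    return float(np.maximum(0.0, free_energy(tgt_logits, tau) - delta).mean())

def select_queries(logits, alpha1, alpha2, tau=1.0):
    """Two-round selection sketch: keep the top alpha1 fraction of unlabeled
    target samples by free energy, then the top alpha2 fraction of those
    candidates by predictive entropy (entropy as second score is assumed)."""
    fe = free_energy(logits, tau)
    k1 = max(1, int(round(len(logits) * alpha1)))
    cand = np.argsort(-fe)[:k1]  # highest free energy first
    logp = logits[cand] - logsumexp(logits[cand], axis=1)[:, None]
    entropy = -(np.exp(logp) * logp).sum(axis=1)
    k2 = max(1, int(round(k1 * alpha2)))
    return cand[np.argsort(-entropy)[:k2]]
```

With ratios expressed as fractions, the Table 3 pairs (e.g., α1 = 0.25, α2 = 0.04) all yield the same overall budget of 1% of the unlabeled pool, which is what the ablation in the quote varies.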