Balanced Open Set Domain Adaptation via Centroid Alignment

Authors: Mengmeng Jing, Jingjing Li, Lei Zhu, Zhengming Ding, Ke Lu, Yang Yang

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three OSDA benchmarks verify that our method can significantly outperform the compared methods and reduce the proportion of the unknown samples being misclassified into known classes.
Researcher Affiliation | Academia | (1) University of Electronic Science and Technology of China, (2) Shandong Normal University, (3) Department of Computer Science, Tulane University
Pseudocode | Yes | Algorithm 1: Unknown Samples Recognition Using EVT. (A hedged sketch of the generic EVT recipe appears after this table.)
Open Source Code | No | The paper does not include an explicit statement or a link to a code repository for the described methodology.
Open Datasets | Yes | Office-31 (Saenko et al. 2010) includes 31 classes from 3 domains: A, W and D. Following (Saito et al. 2018b), we select 10 classes as known and 11 classes as unknown. VisDA-2017 (Peng et al. 2017) contains 2 domains: Synthetic and Real. Each domain includes 12 classes. Following (Saito et al. 2018b), we take the first 6 classes as known and the remaining as unknown. Image-CLEF includes 4 domains: I, C, P and B. Each domain contains 12 classes. We use the first 6 classes in alphabetical order as the known and the rest as the unknown. (The alphabetical split is sketched after this table.)
Dataset Splits | No | The paper mentions using "importance-weighted cross-validation" for hyperparameter tuning, but it does not specify explicit dataset splits (e.g., percentages or sample counts) for training, validation, or testing. (A minimal sketch of importance-weighted cross-validation appears after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running its experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not provide specific version numbers for software libraries, frameworks, or programming languages used for implementation.
Experiment Setup | Yes | We adopt Adam (Kingma and Ba 2015) to optimize these models with learning rate 4e-4 for the S-VAE models and 1e-3 for the classifier. All learning rates decrease during training following an inverse decay schedule. As for the hyperparameters, we get the optimal hyperparameters through importance-weighted cross-validation (Sugiyama, Krauledat, and Müller 2007). As our method performs stably under some hyperparameters, we fix the centroid update rate α = 0.2, the tail size η = 0.02, the threshold ζ = 0.98, and the margin angle m = 90 across all the experiments. In addition, for Office-31 and Image-CLEF, we set λ = 1.0, γ = 1.0. For VisDA-2017, we set λ = 0.5, γ = 0.5. (An assumption-laden training-setup sketch appears after this table.)
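
The paper's Algorithm 1 (Unknown Samples Recognition Using EVT) is only named in the table above, so the sketch below shows the generic EVT recipe that name suggests: fit a Weibull distribution to the extreme tail of centroid distances, then flag a test sample as unknown when its tail probability exceeds the threshold ζ = 0.98 from the setup row. This is a minimal sketch assuming an OpenMax-style reading of the tail size η = 0.02; every function and variable name is illustrative, not the authors' code.

```python
# Minimal EVT sketch (assumption: OpenMax-style Weibull tail fitting,
# NOT the authors' Algorithm 1). eta and zeta match the paper's setup row.
import numpy as np
from scipy.stats import weibull_min

def fit_weibull_tail(distances, eta=0.02):
    """Fit a Weibull to the largest eta-fraction of centroid distances."""
    distances = np.sort(np.asarray(distances, dtype=float))
    tail_len = max(int(np.ceil(eta * len(distances))), 3)
    tail = distances[-tail_len:]
    # Pin the location just below the tail so the fit covers extreme values only.
    shape, loc, scale = weibull_min.fit(tail, floc=tail.min() - 1e-6)
    return shape, loc, scale

def is_unknown(distance, weibull_params, zeta=0.98):
    """Flag a sample as unknown if its centroid distance sits deep in the tail."""
    shape, loc, scale = weibull_params
    return weibull_min.cdf(distance, shape, loc=loc, scale=scale) > zeta

# Usage: fit one tail per known class from training-sample distances to that
# class centroid, then test each target sample against its nearest centroid.
params = fit_weibull_tail(np.random.rand(500))
print(is_unknown(1.5, params))
```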
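
The known/unknown protocol quoted in the Open Datasets row is mechanical enough to sketch in a few lines. The class list below is the one commonly reported for Image-CLEF's 12 shared classes and is an assumption here, not taken from the paper.

```python
# Hedged sketch of the Image-CLEF split quoted above: first 6 classes in
# alphabetical order are known, the rest are unknown. Class names assumed.
def split_known_unknown(class_names, num_known=6):
    ordered = sorted(class_names)
    return ordered[:num_known], ordered[num_known:]

known, unknown = split_known_unknown(
    ["aeroplane", "bike", "bird", "boat", "bottle", "bus",
     "car", "dog", "horse", "monitor", "motorbike", "people"])
print(known)    # ['aeroplane', 'bike', 'bird', 'boat', 'bottle', 'bus']
print(unknown)  # ['car', 'dog', 'horse', 'monitor', 'motorbike', 'people']
```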
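
The Dataset Splits row notes that hyperparameters were tuned by importance-weighted cross-validation (Sugiyama, Krauledat, and Müller 2007) without split details. The sketch below shows only the core idea of that technique, assuming a density-ratio estimate w(x) ≈ p_target(x)/p_source(x) is available from some external estimator; it is not a reconstruction of the paper's tuning procedure.

```python
import numpy as np

def iwcv_risk(val_losses, density_ratios):
    """Importance-weighted validation risk (normalized variant): held-out
    source-domain losses are reweighted by w(x) ~ p_target(x) / p_source(x),
    so the selected hyperparameters favor the target distribution."""
    losses = np.asarray(val_losses, dtype=float)
    weights = np.asarray(density_ratios, dtype=float)
    return float(np.sum(weights * losses) / np.sum(weights))

# Usage: pick the hyperparameter setting with the lowest weighted risk.
print(iwcv_risk([0.3, 0.9, 0.5], [2.0, 0.5, 1.0]))
```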
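
The Experiment Setup row pins down enough hyperparameters for a configuration sketch. The PyTorch snippet below is assumption-laden rather than a reconstruction: the quoted text says only "inverse decay", so the common domain-adaptation schedule lr_p = lr_0 · (1 + a·p)^(−b) with a = 10, b = 0.75 is assumed; the placeholder modules stand in for the paper's S-VAE and classifier; and the centroid update is written as the usual exponential moving average with the paper's rate α = 0.2 (the exact update form is not given in the quote).

```python
# Hedged PyTorch sketch of the quoted setup. Placeholder modules, the
# inverse-decay constants (a=10, b=0.75), and the EMA update are assumptions.
import torch
import torch.nn as nn

svae = nn.Linear(512, 128)       # stands in for the paper's S-VAE
classifier = nn.Linear(128, 7)   # stands in for the classifier head

# Learning rates from the paper: 4e-4 for S-VAE models, 1e-3 for the classifier.
opt_svae = torch.optim.Adam(svae.parameters(), lr=4e-4)
opt_clf = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def inverse_decay(step, total_steps, a=10.0, b=0.75):
    """Common DA 'inverse decay' factor; the paper does not give the formula."""
    p = step / total_steps
    return (1.0 + a * p) ** (-b)

sched_svae = torch.optim.lr_scheduler.LambdaLR(opt_svae, lambda s: inverse_decay(s, 10_000))
sched_clf = torch.optim.lr_scheduler.LambdaLR(opt_clf, lambda s: inverse_decay(s, 10_000))

@torch.no_grad()
def update_centroid(centroid, batch_mean, alpha=0.2):
    """EMA centroid update with the paper's rate alpha = 0.2 (form assumed)."""
    return (1.0 - alpha) * centroid + alpha * batch_mean

# Each training step: opt_*.step(); sched_*.step(); then refresh class centroids.
```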