Evidential Neighborhood Contrastive Learning for Universal Domain Adaptation

Authors: Liang Chen, Yihang Lou, Jianzhong He, Tao Bai, Minghua Deng

AAAI 2022, pp. 6258-6267

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three benchmarks demonstrate that TNT significantly outperforms previous state-of-the-art UniDA methods.
Researcher Affiliation | Collaboration | Liang Chen¹, Yihang Lou²*, Jianzhong He², Tao Bai², Minghua Deng¹. ¹School of Mathematical Sciences, Peking University; ²Intelligent Vision Dept, Huawei Technologies. {clandzyy, dengmh}@pku.edu.cn, {louyihang1, jianzhong.he, baitao13}@huawei.com
Pseudocode | No | The paper describes its methodology using textual descriptions and mathematical formulations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We conduct experiments on three benchmark datasets. Office (Saenko et al. 2010) consists of about 4700 images in 31 categories from three domains: Amazon (A), DSLR (D), and Webcam (W). Office-Home (Venkateswara et al. 2017) is a larger dataset with 15500 images from 65 categories in four domains: Artistic images (A), Clip-Art images (C), Product images (P), and Real-World images (R). VisDA (Peng et al. 2017) is a large-scale challenging dataset with 12 categories, whose source domain contains about 150K synthetic images (S) and whose target domain contains about 50K real-world images (R). (A hedged data-loading sketch follows the table.)
Dataset Splits | No | The paper states that 'category split' details are given in the supplemental material, but it does not explicitly provide training, validation, or test splits (e.g., percentages or counts) in the main text.
Hardware Specification | Yes | Our implementation is based on PyTorch and we conduct all experiments on one Tesla V100 GPU.
Software Dependencies | No | The paper mentions PyTorch but does not provide specific version numbers for it or any other software dependency.
Experiment Setup | Yes | Our implementation is based on PyTorch and we conduct all experiments on one Tesla V100 GPU. The network backbone is ResNet50 (He et al. 2016) pretrained on ImageNet (Deng et al. 2009), and the evidential head consists of two fully-connected layers. In the training phase, we choose the exp function as the evidence function, because we empirically found it to be numerically more stable when training the evidential loss L1. Following previous work (Saito et al. 2020), the batch size is set to 36 and the temperature parameter τ is set to 0.05. The memory bank is updated with momentum γ = 0.5, and the nearest-neighbor number k is set to 30 for Office and Office-Home and 50 for VisDA by default. We train our model for 10000 iterations with Nesterov momentum SGD. The initial learning rate is set to 0.001 and is decayed with the same schedule as in previous studies (Long et al. 2018; Saito et al. 2020). (Hedged configuration sketches follow the table.)
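
Since all three benchmarks are distributed as class-labeled image folders, loading a domain pair is straightforward with torchvision. The sketch below is illustrative only: the directory paths, the 224×224 crop, and the ImageNet normalization are assumptions, as the quoted text does not describe the preprocessing pipeline; only the batch size of 36 comes from the paper.

```python
# Hypothetical loading of one Office domain pair with torchvision.
# Paths ("office/amazon", "office/webcam") and the transform are assumed,
# not taken from the paper; batch size 36 matches the quoted setup.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),  # matching the pretrained backbone
])

source = datasets.ImageFolder("office/amazon", transform=transform)
target = datasets.ImageFolder("office/webcam", transform=transform)

source_loader = DataLoader(source, batch_size=36, shuffle=True, drop_last=True)
target_loader = DataLoader(target, batch_size=36, shuffle=True, drop_last=True)
```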
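The quoted setup describes an evidential head of two fully-connected layers with exp as the evidence function. A minimal sketch, assuming a 512-unit hidden layer, logit clamping for stability, and the Dirichlet parameterization alpha = evidence + 1 that is standard in evidential deep learning; none of these details are quoted from the paper:

```python
# Sketch of a two-layer evidential head on top of ResNet50 features (2048-d).
# Hidden width, clamping, and the Dirichlet parameterization are assumptions.
import torch
import torch.nn as nn

class EvidentialHead(nn.Module):
    def __init__(self, in_dim=2048, hidden_dim=512, num_classes=31):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, features):
        logits = self.fc2(torch.relu(self.fc1(features)))
        # exp evidence function, chosen in the paper for numerical stability;
        # clamping the logits before exp() is a common guard, assumed here
        evidence = torch.exp(torch.clamp(logits, max=10.0))
        alpha = evidence + 1.0  # Dirichlet concentration parameters
        return alpha
```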
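The momentum memory bank and nearest-neighbor retrieval could look like the following, plugging in the quoted values γ = 0.5, τ = 0.05, and k = 30 (Office, Office-Home) or 50 (VisDA). The L2 normalization and cosine similarity are conventional choices assumed here, not details quoted from the paper:

```python
# Sketch of a momentum-updated memory bank with k-NN lookup.
# gamma = 0.5, tau = 0.05, and k = 30/50 match the quoted hyperparameters.
import torch
import torch.nn.functional as F

class MemoryBank:
    def __init__(self, num_samples, dim, momentum=0.5):
        # randomly initialized, L2-normalized feature slots (assumption)
        self.bank = F.normalize(torch.randn(num_samples, dim), dim=1)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, indices, features):
        # exponential moving average with momentum gamma = 0.5, renormalized
        features = F.normalize(features, dim=1)
        self.bank[indices] = F.normalize(
            self.momentum * self.bank[indices]
            + (1.0 - self.momentum) * features, dim=1)

    def knn(self, features, k=30, tau=0.05):
        # temperature-scaled cosine similarities, then top-k neighbor indices
        sims = F.normalize(features, dim=1) @ self.bank.t() / tau
        return sims.topk(k, dim=1).indices
```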
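Finally, a sketch of the quoted optimizer settings. The inverse decay lr_p = lr0 * (1 + 10p)^(-0.75), where p is the training progress, is the schedule used by Long et al. 2018 and Saito et al. 2020, so its exact form here is inferred from the citation; the SGD momentum value 0.9 is likewise an assumption.

```python
# Nesterov momentum SGD, initial lr 0.001, 10000 iterations (all quoted);
# the inverse decay schedule and momentum 0.9 are inferred/assumed.
import torch
import torch.nn as nn

model = nn.Linear(2048, 31)  # placeholder for backbone + evidential head
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, nesterov=True)

def inv_lr(step, total_steps=10000, lr0=0.001, alpha=10.0, beta=0.75):
    """Inverse decay schedule from Long et al. 2018 / Saito et al. 2020."""
    p = step / total_steps  # training progress in [0, 1]
    return lr0 * (1.0 + alpha * p) ** (-beta)

for step in range(10000):
    for group in optimizer.param_groups:
        group["lr"] = inv_lr(step)
    # ... compute evidential + neighborhood-contrastive losses here ...
    # loss.backward(); optimizer.step(); optimizer.zero_grad()
```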