Label-Noise Robust Domain Adaptation

Authors: Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, Dacheng Tao

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on both synthetic and real-world data verify the effectiveness of the proposed method.
Researcher Affiliation | Collaboration | 1 Department of Computer Vision Technology (VIS), Baidu Inc.; 2 UBTECH Sydney AI Centre, The University of Sydney; 3 School of Mathematics and Statistics, University of Melbourne; 4 Department of Biomedical Informatics, University of Pittsburgh; 5 Department of Philosophy, Carnegie Mellon University.
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete statement or link regarding the availability of its source code.
Open Datasets | Yes | MNIST-USPS: USPS is a handwritten-digit dataset... MNIST shares the same 10 digit classes... VLCS (Torralba & Efros, 2011) consists of images from five common classes drawn from Pascal VOC 2007 (V), LabelMe (L), Caltech (C), and SUN09 (S).
Dataset Splits | Yes | "After the noisy data are obtained, we leave 10 percent of source data as validation set."
Hardware Specification | Yes | "We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research." Computational resources were also provided by Pittsburgh Supercomputing under grant TG-ASC170024.
Software Dependencies | No | The paper mentions software such as Caffe and AlexNet for training models, but it does not specify version numbers for its software dependencies or libraries.
Experiment Setup | Yes | The batch sizes for both source and target data are set to 100. The initial learning rate is r0 = 0.01, decayed according to r0 * (1 + 0.0001 t)^(-0.75), where t is the index of the current iteration.
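The 10% validation hold-out quoted under Dataset Splits can be sketched as follows; this is a minimal illustration, and the function name, seeding, and shuffling strategy are assumptions rather than details from the paper:

```python
import random

def split_validation(samples, val_fraction=0.1, seed=0):
    """Hold out val_fraction of the (noisy) source data as a validation set.

    Hypothetical helper: the paper only states that 10 percent of source
    data is left out, not how the split is drawn.
    """
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_val = int(len(samples) * val_fraction)
    val = [samples[i] for i in idx[:n_val]]
    train = [samples[i] for i in idx[n_val:]]
    return train, val

train, val = split_validation(list(range(100)))
print(len(train), len(val))  # 90 10
```

A fixed seed makes the hold-out reproducible across runs, which matters when the validation set is used to select among noise-robust models.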
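The schedule quoted under Experiment Setup is the common inverse-decay rule r_t = r0 / (1 + alpha * t)^beta with r0 = 0.01, alpha = 0.0001, and beta = 0.75 (Caffe's "inv" learning-rate policy). A small sketch, with a helper name of our choosing:

```python
def learning_rate(t, r0=0.01, alpha=1e-4, beta=0.75):
    """Inverse-decay schedule: r_t = r0 / (1 + alpha * t) ** beta.

    Matches the reported r0 = 0.01 and decay r0 * (1 + 0.0001 t)^(-0.75);
    the helper name is illustrative, not from the paper.
    """
    return r0 / (1.0 + alpha * t) ** beta

print(learning_rate(0))  # 0.01 at the first iteration
```

The rate decays smoothly: at t = 10000 the denominator is 2^0.75, so the rate has dropped to roughly 0.0059.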