Improving Domain-Specific Classification by Collaborative Learning with Adaptation Networks

Authors: Si Wu, Jian Zhong, Wenming Cao, Rui Li, Zhiwen Yu, Hau-San Wong

AAAI 2019, pp. 5450-5457 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform extensive experiments on multiple digit and object datasets, and the effectiveness and superiority of the proposed approach is presented and verified on multiple visual adaptation benchmarks, e.g., we improve the state-of-the-art on the task of MNIST → SVHN from 76.5% to 84.9% without specific augmentation.
Researcher Affiliation | Academia | 1 School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China; 2 Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong; cswusi@scut.edu.cn, cszj@mail.scut.edu.cn, {wenmincao2-c, ruili52-c}@my.cityu.edu.hk, zhwyu@scut.edu.cn, cshswong@cityu.edu.hk
Pseudocode | Yes | Algorithm 1: Pseudo-code of collaborative learning between adaptation and target-specific networks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide a link to a code repository.
Open Datasets | Yes | Digit datasets: MNIST (LeCun et al. 1998), USPS (Hull 1994), Syn-Digits (Ganin and Lempitsky 2015) and SVHN (Netzer et al. 2011); object datasets: CIFAR-10 (Krizhevsky and Hinton 2009) and STL-10 (Coates, Lee, and Ng 2011); and the VisDA-2017 dataset (Peng et al. 2017).
Dataset Splits | Yes | We use the standard evaluation protocol (Shu et al. 2018) for all the tasks...
Hardware Specification | No | The paper mentions using a 'relatively small CNN architecture' and 'ResNet-50' for different datasets, but it does not specify any details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the 'ADAM optimization method' but does not provide any specific software names or version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries with their versions).
Experiment Setup | Yes | In our collaborative learning framework, the adaptation and target-specific networks have the same architecture but with different initializations and dropout. At each training step, two mini-batches of source and target instances are drawn from their respective datasets. We adopt the ADAM optimization method (Kingma and Ba 2015) to update the networks. Due to the high resolution, we reduce the batch size to 44.
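The Experiment Setup row only describes the per-step procedure at a high level: two networks with identical architectures but different initializations and dropout, one labeled source mini-batch and one unlabeled target mini-batch per step, and Adam updates. Since the paper releases no code, the sketch below is only a minimal illustration of such a step in PyTorch under stated assumptions: the SmallCNN module, the dropout rates, the learning rates, and the pseudo-label loss used for the target-specific network are all hypothetical placeholders, not the authors' Algorithm 1.

```python
# Illustrative sketch (not the authors' code) of one collaborative training
# step: an adaptation network and a target-specific network with the same
# architecture but different initializations/dropout, updated with Adam on
# one source and one target mini-batch per step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Hypothetical 'relatively small CNN' classifier for the digit tasks."""
    def __init__(self, num_classes=10, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(self.dropout(h))

# Same architecture, different initializations and dropout, as stated in the
# setup row; the specific dropout rates and learning rates are assumptions.
adaptation_net = SmallCNN(dropout=0.5)
target_net = SmallCNN(dropout=0.3)
opt_a = torch.optim.Adam(adaptation_net.parameters(), lr=1e-3)
opt_t = torch.optim.Adam(target_net.parameters(), lr=1e-3)

def training_step(xs, ys, xt):
    """One step on a labeled source batch (xs, ys) and an unlabeled target
    batch xt.  The pseudo-label term below stands in for the paper's
    collaborative objective between the two networks."""
    # Adaptation network: supervised loss on the labeled source mini-batch.
    loss_a = F.cross_entropy(adaptation_net(xs), ys)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Target-specific network: fit pseudo-labels the adaptation network
    # produces on the target mini-batch.
    with torch.no_grad():
        pseudo = adaptation_net(xt).argmax(dim=1)
    loss_t = F.cross_entropy(target_net(xt), pseudo)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    return loss_a.item(), loss_t.item()
```

For VisDA-2017 the paper swaps in ResNet-50 backbones and, due to the high image resolution, reduces the batch size to 44; everything else in the sketch would follow the same two-mini-batch pattern.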