Transferable Attention for Domain Adaptation
Authors: Ximei Wang, Liang Li, Weirui Ye, Mingsheng Long, Jianmin Wang
AAAI 2019, pp. 5345-5352
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments validate that our proposed models exceed state of the art results on standard domain adaptation datasets. |
| Researcher Affiliation | Academia | Ximei Wang, Liang Li, Weirui Ye, Mingsheng Long, Jianmin Wang. School of Software, Tsinghua University, China; KLiss, MOE; BNRist; Research Center for Big Data, Tsinghua University, China. {wxm17,liliang17,ywr16}@mails.tsinghua.edu.cn; {mingsheng,jimwang}@tsinghua.edu.cn |
| Pseudocode | No | The paper includes mathematical formulations and a diagram (Figure 1), but no explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and datasets will be available at github.com/thuml. |
| Open Datasets | Yes | Office-31 (Saenko et al. 2010), a standard benchmark for visual domain adaptation, contains 4,652 images and 31 categories from three distinct domains: Amazon (A), which contains images downloaded from amazon.com, Webcam (W) and DSLR (D). [...] Office-Home (Venkateswara et al. 2017) is a more challenging dataset for domain adaptation evaluation. |
| Dataset Splits | No | The paper mentions following 'standard evaluation protocols' and 'progressive training strategies' but does not provide specific percentages or counts for training, validation, and test splits needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | Our methods were implemented based on PyTorch, and ResNet-50 (He et al. 2016) models pretrained on the ImageNet dataset (Russakovsky et al. 2014). The paper mentions PyTorch but does not specify its version number or the versions of other key software dependencies. |
| Experiment Setup | Yes | We set λ = 1.0 and γ = 0.1 throughout all experiments. Our methods were implemented based on PyTorch, with ResNet-50 (He et al. 2016) models pretrained on the ImageNet dataset (Russakovsky et al. 2014). We fine-tune all convolutional and pooling layers and apply back-propagation to train the classifier layer and all domain discriminators. For any module trained from scratch, the learning rate was set to 10 times that of the lower layers. We adopt mini-batch stochastic gradient descent (SGD) with momentum of 0.95, using the learning rate and progressive training strategies of (Ganin and Lempitsky 2015). |
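
To make the Experiment Setup row concrete, below is a minimal PyTorch sketch of the optimizer configuration it describes: an ImageNet-pretrained ResNet-50 backbone fine-tuned at a base learning rate, a from-scratch classifier head trained at 10x that rate, and SGD with momentum 0.95. The schedule constants (base rate 0.01, α = 10, β = 0.75) are assumptions carried over from the cited progressive strategy of Ganin and Lempitsky (2015), not numbers stated in this paper, and the method's domain discriminators and attention modules are omitted.

```python
import torch
import torchvision

# Hyperparameters reported in the paper's Experiment Setup; their exact
# placement in the training objective is defined in the paper, not here.
LAMBDA, GAMMA = 1.0, 0.1

# ImageNet-pretrained ResNet-50 backbone (fine-tuned) plus a classifier
# head trained from scratch; 31 classes matches Office-31 as an example.
backbone = torchvision.models.resnet50(pretrained=True)
classifier = torch.nn.Linear(backbone.fc.in_features, 31)
backbone.fc = torch.nn.Identity()

# From-scratch modules get 10x the base learning rate of the fine-tuned
# layers, per the Experiment Setup row. Momentum 0.95 is from the paper;
# base_lr = 0.01 is an assumption following Ganin & Lempitsky (2015).
base_lr = 0.01
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": base_lr},
        {"params": classifier.parameters(), "lr": base_lr * 10},
    ],
    momentum=0.95,
)

def adjust_lr(optimizer, progress, mu0=0.01, alpha=10.0, beta=0.75):
    """Progressive schedule assumed from Ganin & Lempitsky (2015):
    lr(p) = mu0 / (1 + alpha * p)**beta, scaled by each group's
    multiplier; `progress` runs from 0 to 1 over training."""
    decay = (1.0 + alpha * progress) ** (-beta)
    for group, mult in zip(optimizer.param_groups, (1.0, 10.0)):
        group["lr"] = mu0 * mult * decay
```

In this sketch, `adjust_lr(optimizer, i / max_iters)` would be called once per iteration before `optimizer.step()`; the helper name and the class count are hypothetical, introduced only for illustration.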