Domain Conditioned Adaptation Network
Authors: Shuang Li, Chi Liu, Qiuxia Lin, Binhui Xie, Zhengming Ding, Gao Huang, Jian Tang (pp. 11386-11393)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three cross-domain benchmarks demonstrate the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks. |
| Researcher Affiliation | Collaboration | 1School of Computer Science and Technology, Beijing Institute of Technology, China 2Department of Computer, Information and Technology, Indiana University-Purdue University Indianapolis, USA 3Department of Automation, Tsinghua University, China, 4AI Labs, Didi Chuxing, China 5Department of Electrical Engineering and Computer Science, Syracuse University, USA |
| Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statement about making the source code available or include a link to a code repository. |
| Open Datasets | Yes | Office-31 (Saenko et al. 2010) is a popular object dataset with 4110 images and 31 classes under office settings. It consists of three distinct domains: Amazon (A), Webcam (W) and DSLR (D). Following (Zhang et al. 2019b), we construct 6 cross-domain tasks: A→W, ..., D→W. Office-Home (Venkateswara et al. 2017) is a challenging benchmark with 15588 images in total, containing 65 classes from 4 domains: Artistic images (Ar), Clip Art (Cl), Product images (Pr) and Real-World images (Rw). From it we build 12 adaptation tasks: Ar→Cl, ..., Rw→Pr. DomainNet is the largest visual domain adaptation dataset so far, involving about 0.6 million images with 345 categories spread evenly across 6 domains: Clipart (clp), Infograph (inf), Painting (pnt), Quickdraw (qdr), Real (rel), Sketch (skt). |
| Dataset Splits | No | Following (Peng et al. 2018), each domain is split into training and test sets. Only training sets of both domains are involved in the training procedure, and the results of target test set are reported. The paper mentions using the 'importance weighted cross-validation method' to select hyperparameters, but does not provide specific details on a separate validation dataset split (e.g., percentages or counts). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch' as the implementation framework but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We use a small batch of 32 samples per domain... we set the learning rate of the classifier layer to be 10 times that of the other layers, while the learning rate of the domain conditioned feature correction blocks is set to 1/10 of the base rate because of its precision. All the images are cropped to 224×224... We adopt stochastic gradient descent (SGD) with momentum of 0.9 and the learning rate strategy as described in (Ganin and Lempitsky 2015)... The values of coefficients α, β are fixed to 1.5 and 0.1, and p is set to 0.8, chosen from {0.2, 0.4, 0.6, 0.8, 1}. |
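The setup row references the learning rate strategy of (Ganin and Lempitsky 2015), which anneals the rate as lr = base_lr / (1 + a·p)^b over training progress p ∈ [0, 1] (commonly with a = 10, b = 0.75), combined with the paper's per-layer multipliers (10× for the classifier, 1/10 for the correction blocks). A minimal sketch of that schedule, with `base_lr` and the function names chosen here for illustration rather than taken from the paper:

```python
def inv_decay_lr(progress, base_lr=0.01, a=10.0, b=0.75):
    """Inverse-decay schedule from Ganin & Lempitsky (2015):
    lr = base_lr / (1 + a * p) ** b, where p in [0, 1] is the
    fraction of training completed. base_lr, a, b are assumed
    defaults, not values stated in this paper."""
    return base_lr / (1.0 + a * progress) ** b

def per_layer_lrs(progress, base_lr=0.01):
    """Apply the paper's per-layer multipliers on top of the
    annealed base rate: classifier at 10x, domain conditioned
    feature correction blocks at 1/10."""
    lr = inv_decay_lr(progress, base_lr)
    return {
        "backbone": lr,          # other layers: base rate
        "classifier": 10.0 * lr, # 10x the base rate
        "correction": 0.1 * lr,  # 1/10 of the base rate
    }
```

In a PyTorch training loop this would typically be realized with one optimizer parameter group per entry of the returned dict, updating each group's `lr` every iteration.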