Adversarial Multiple Source Domain Adaptation

Authors: Han Zhao, Shanghang Zhang, Guanhang Wu, José M. F. Moura, João P. Costeira, Geoffrey J. Gordon

NeurIPS 2018

Reproducibility assessment (variable, result, and LLM response):

Research Type: Experimental. "To demonstrate the effectiveness of MDAN, we conduct extensive experiments showing superior adaptation performance on both classification and regression problems: sentiment analysis, digit classification, and vehicle counting."

Researcher Affiliation: Academia. Carnegie Mellon University; IST, Universidade de Lisboa. Emails: {hzhao1,shanghaz,guanhanw,moura,ggordon}@andrew.cmu.edu, jpc@isr.ist.utl.pt.

Pseudocode: No. The paper describes the algorithm in text and provides a network architecture diagram (Figure 1), but it does not include structured pseudocode or algorithm blocks.

Open Source Code: No. The paper neither states that the source code is available nor links to a code repository.

Open Datasets: Yes. "We evaluate both hard and soft MDAN and compare them with state-of-the-art methods on three real-world datasets: the Amazon benchmark dataset [11] for sentiment analysis, a digit classification task that includes 4 datasets: MNIST [29], MNIST-M [17], SVHN [37], and Synth Digits [17], and a public, large-scale image dataset on vehicle counting from multiple city cameras [52]."

Dataset Splits: No. For Amazon Reviews: "Each source domain has 2000 labeled examples, and the target test set has 3000 to 6000 examples. During training, we randomly sample the same number of unlabeled target examples as the source examples in each mini-batch." For Digits: "Each source domain has 20,000 labeled images and the target test set has 9,000 examples." While training and testing data are specified, there is no explicit mention of a separate validation set or its size/proportion.

Hardware Specification: No. The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or cloud instance types) used for running its experiments.

Software Dependencies: No. The paper mentions various methods (e.g., DANN, ADDA, MTAE) and uses neural networks, implying software dependencies, but it does not name any particular software with version numbers (e.g., "PyTorch 1.9", "TensorFlow 2.x").

Experiment Setup: No. The paper states only: "Due to space limit, details about network architecture and training parameters of proposed and baseline methods, and detailed dataset description are described in appendix."
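Since the paper provides no pseudocode, the sketch below illustrates the kind of loss aggregation MDAN's hard and soft variants perform: each source domain contributes a task loss plus an adversarial domain-discriminator loss, and the soft variant combines domains with a smoothed maximum (log-sum-exp) that upper-bounds the worst domain and approaches it as the smoothing parameter grows. This is a hedged illustration, not the authors' implementation; the function names, the hyperparameter names (`gamma`, `lam`), their defaults, and the exact weighting of the adversarial term are assumptions.

```python
import math

def soft_mdan_objective(task_losses, domain_losses, gamma=10.0, lam=0.1):
    """Soft (smoothed-max) aggregation over k source domains.

    per_domain[i] = task loss of domain i + lam * adversarial loss of
    domain i (the weighting via `lam` is an assumption).  The log-sum-exp
    is computed stably by factoring out the maximum, and it converges to
    the plain max as gamma -> infinity.
    """
    per_domain = [t + lam * d for t, d in zip(task_losses, domain_losses)]
    m = max(per_domain)
    return m + math.log(sum(math.exp(gamma * (p - m)) for p in per_domain)) / gamma

def hard_mdan_objective(task_losses, domain_losses, lam=0.1):
    """Hard aggregation: train against the single worst source domain."""
    per_domain = [t + lam * d for t, d in zip(task_losses, domain_losses)]
    return max(per_domain)
```

For three source domains with `task_losses=[0.5, 1.0, 0.3]` and `domain_losses=[0.2, 0.1, 0.4]`, the soft objective lies slightly above the hard value and tightens toward it as `gamma` increases. In the paper's adversarial setup, this aggregated quantity would be minimized over the shared feature extractor and task classifier while the per-domain discriminator losses are maximized (e.g., via gradient reversal); that training loop is omitted here.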