Adversarial Training Based Multi-Source Unsupervised Domain Adaptation for Sentiment Analysis
Authors: Yong Dai, Jian Liu, Xiancong Ren, Zenglin Xu
AAAI 2020, pp. 7618–7625
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two SA datasets demonstrate the promising performance of our frameworks, which outperforms unsupervised state-of-the-art competitors. |
| Researcher Affiliation | Academia | SMILE Lab, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Center for Artificial Intelligence, Peng Cheng Laboratory, Shenzhen, Guangdong, China |
| Pseudocode | Yes | Algorithm 1 Weighting Scheme based Unsupervised Domain Adaptation framework (WS-UDA); Algorithm 2 Two-stage Training based Unsupervised Domain Adaptation (2ST-UDA) (a hedged sketch of the WS-UDA weighting idea follows the table) |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Amazon review dataset (Blitzer, Dredze, and Pereira 2007); FDU-MTL dataset (implicitly treated as a commonly used benchmark; it contains the IMDb and MR datasets) |
| Dataset Splits | Yes | The remaining samples in the target domain are used as the validation set and test set, and the number of samples in the validation set is the same as (Chen et al. 2018); each domain has a development set of 200 samples and a test set of 400 samples. |
| Hardware Specification | No | The paper does not explicitly describe any specific hardware used for running experiments (e.g., GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions using "MLP as our feature extractor" and optimizing "with the Adam" but does not provide specific version numbers for any software libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | we set the batch size 8 and the learning rate 0.0001 for the sentiment classifier and the domain classifier. Besides, we perform early stopping on the validation set during the training process. (a hedged configuration sketch follows the table) |
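
The paper's Algorithm 1 (WS-UDA) is not reproduced on this page. The following is a minimal sketch of the weighting idea the name suggests, assuming WS-UDA combines the source-specific sentiment classifiers' predictions on a target sample, weighted by the shared domain discriminator's per-source similarity scores. The function name `ws_uda_predict`, the tensor shapes, and the softmax-based weighting are all assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def ws_uda_predict(features, domain_classifier, sentiment_classifiers):
    """Hypothetical sketch of a weighting-scheme prediction rule.

    Each source-specific sentiment classifier's prediction on a target
    sample is weighted by the domain discriminator's estimate of how
    similar the sample is to that source domain.
    features: (B, D) extracted target-sample features.
    """
    # Similarity of each target sample to each of the K source domains: (B, K)
    domain_weights = F.softmax(domain_classifier(features), dim=1)
    # Per-source sentiment predictions, stacked to shape (B, K, 2)
    preds = torch.stack(
        [F.softmax(clf(features), dim=1) for clf in sentiment_classifiers],
        dim=1,
    )
    # Weighted combination across sources: (B, 2)
    return (domain_weights.unsqueeze(-1) * preds).sum(dim=1)
```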
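
For the setup row above, here is a minimal PyTorch sketch of the reported configuration: Adam optimizer, learning rate 0.0001, batch size 8, and early stopping on the validation set. The MLP layer sizes, input dimensionality, number of source domains, and patience value are assumptions; the paper reports none of them.

```python
import torch
import torch.nn as nn

# Reported in the paper: batch size 8, learning rate 1e-4, Adam, early stopping.
# All other values below are assumptions made only to render the setup concrete.
BATCH_SIZE = 8
LEARNING_RATE = 1e-4
INPUT_DIM = 5000      # hypothetical bag-of-words dimensionality
NUM_SOURCES = 3       # hypothetical: three source domains, one held-out target

feature_extractor = nn.Sequential(   # the paper says "MLP"; layer sizes are guesses
    nn.Linear(INPUT_DIM, 500), nn.ReLU(),
    nn.Linear(500, 128), nn.ReLU(),
)
sentiment_classifier = nn.Linear(128, 2)          # positive / negative
domain_classifier = nn.Linear(128, NUM_SOURCES)   # which source a sample resembles

params = (
    list(feature_extractor.parameters())
    + list(sentiment_classifier.parameters())
    + list(domain_classifier.parameters())
)
optimizer = torch.optim.Adam(params, lr=LEARNING_RATE)

# Early stopping on validation accuracy, as reported (patience is a guess).
best_val_acc, bad_epochs, PATIENCE = 0.0, 0, 5

def should_stop(val_acc: float) -> bool:
    """Return True once validation accuracy fails to improve for PATIENCE epochs."""
    global best_val_acc, bad_epochs
    if val_acc > best_val_acc:
        best_val_acc, bad_epochs = val_acc, 0
    else:
        bad_epochs += 1
    return bad_epochs >= PATIENCE
```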