Simultaneous Learning of Pivots and Representations for Cross-Domain Sentiment Classification
Authors: Liang Li, Weirui Ye, Mingsheng Long, Yateng Tang, Jin Xu, Jianmin Wang
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment with real tasks of cross-domain sentiment classification over 20 domain pairs where our model outperforms prior arts. The experiments conducted on a number of different source and target domains show that our method achieves the best accuracy compared with a number of strong baselines. Table 2 reports the classification accuracies of different methods on the Amazon product reviews dataset. |
| Researcher Affiliation | Collaboration | ¹School of Software, BNRist, Tsinghua University, China; ¹Research Center for Big Data, Tsinghua University, China; ²Data Quality Team, WeChat, Tencent Inc., China |
| Pseudocode | Yes | Algorithm 1 Pseudocode for the first stage of TPT |
| Open Source Code | No | The paper does not provide any explicit statements about making the code open-source or include a link to a code repository. |
| Open Datasets | Yes | To facilitate direct comparison with previous work we experiment with the product review domains (Blitzer, Dredze, and Pereira 2007) of Books (B), DVDs (D), Electronics (E), Kitchen (K) and airline services reviews (A) (20 ordered domain pairs) |
| Dataset Splits | Yes | Firstly, we divide the whole data into a training set and a development set and maintain the size of the pivot set during training. There are 1000 positive and 1000 negative labeled reviews in each domain and the remaining reviews form the unlabeled set. (See the domain-pair and split sketch after the table.) |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or cloud computing resources used for the experiments. |
| Software Dependencies | No | The paper mentions optimizers (RMSprop, Adam) and model architectures (Transformer, CNN, LSTM) but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used. |
| Experiment Setup | Yes | The word embedding matrix W_e and position embedding matrix W_p are randomly initialized from a uniform distribution U[−0.2, 0.2]. We use 50 Monte Carlo integration samples and keep λ fixed at 0.1 throughout all experiments. The CNN classifier takes the features from the Transformer as input. We train TPT using the RMSprop optimizer with the learning rate set to 7e-4 and use Adam (Kingma and Ba 2014) for text convolutional network fine-tuning. (See the configuration sketch after the table.) |
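The Open Datasets and Dataset Splits rows imply a simple construction: 5 domains yield 5 × 4 = 20 ordered (source, target) pairs, and each domain contributes 1000 positive and 1000 negative labeled reviews, with the remainder unlabeled. The sketch below illustrates that construction; the review-dictionary format and all function names are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of the domain-pair enumeration and per-domain split
# described above; the review format (dicts with a "label" key) is assumed.
from itertools import permutations

DOMAINS = ["B", "D", "E", "K", "A"]  # Books, DVDs, Electronics, Kitchen, Airline

# 5 domains -> 5 * 4 = 20 ordered (source, target) pairs.
domain_pairs = list(permutations(DOMAINS, 2))
assert len(domain_pairs) == 20

def split_domain(reviews):
    """Mimic the reported split: 1000 positive and 1000 negative labeled
    reviews per domain; all remaining reviews form the unlabeled set."""
    positives = [r for r in reviews if r["label"] == 1][:1000]
    negatives = [r for r in reviews if r["label"] == 0][:1000]
    labeled = positives + negatives
    labeled_ids = {id(r) for r in labeled}
    unlabeled = [r for r in reviews if id(r) not in labeled_ids]
    return labeled, unlabeled
```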
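The Experiment Setup row pins down several hyperparameters. Below is a minimal PyTorch sketch of that configuration, assuming PyTorch as the framework (the paper names no library) and placeholder vocabulary, sequence, and embedding sizes; only the initialization range, optimizer choices, learning rate, λ = 0.1, and the 50 Monte Carlo samples come from the quoted text.

```python
# Minimal PyTorch sketch of the reported training configuration.
# Model definitions are placeholders for the paper's Transformer-based
# pivot/representation learner (TPT) and its CNN classifier.
import torch
import torch.nn as nn

VOCAB_SIZE, MAX_LEN, EMB_DIM = 30000, 256, 256  # assumed sizes
NUM_MC_SAMPLES = 50   # Monte Carlo integration samples (from the paper)
LAMBDA = 0.1          # fixed regularization weight (from the paper)

word_emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)  # W_e
pos_emb = nn.Embedding(MAX_LEN, EMB_DIM)      # W_p
# Both embedding matrices are initialized from U[-0.2, 0.2].
for emb in (word_emb, pos_emb):
    nn.init.uniform_(emb.weight, a=-0.2, b=0.2)

# Stage 1: train TPT with RMSprop at the reported learning rate of 7e-4.
tpt_params = list(word_emb.parameters()) + list(pos_emb.parameters())
tpt_optimizer = torch.optim.RMSprop(tpt_params, lr=7e-4)

# Stage 2: fine-tune the text CNN on Transformer features with Adam.
cnn_classifier = nn.Conv1d(EMB_DIM, 100, kernel_size=3)  # stand-in for the CNN
cnn_optimizer = torch.optim.Adam(cnn_classifier.parameters())
```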